%Simulate the error in magnitude caused by errors in flux measurements.
% mag = -2.5*log10(f/F), where
%f is the spectral flux density and
%F is the "zero point" flux density (a constant).
%Due to noise in the detectors, etc., the quantity f has
%errors. That is, f = <f> +/- df (here <f> denotes the mean value of f and
%df is the noise).
%We assume the noise is Gaussian with zero mean (mu=0) and standard deviation sigma.

$${\rm mag} = -2.5\log_{10}(f/F)$$
$$= -2.5\log_{10}\left(\frac{\langle f\rangle + df}{F}\right)$$
$$= -2.5\log_{10}\left(\frac{\langle f\rangle}{F}\right) - 2.5\log_{10}\left(1 + \frac{df}{\langle f\rangle}\right)$$

However, the first term is nothing but the mean magnitude, call it $\langle{\rm mag}\rangle$. The second term is the error in magnitude. What is plotted below is the sum of the first and second terms.
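
For small relative errors ($|df/\langle f\rangle| \ll 1$), the second term can be linearized to give a handy rule of thumb (the simulation below keeps the full logarithm and does not rely on this approximation):

$$-2.5\log_{10}\left(1+\frac{df}{\langle f\rangle}\right) \approx -\frac{2.5}{\ln 10}\,\frac{df}{\langle f\rangle} \approx -1.0857\,\frac{df}{\langle f\rangle}$$

so a Gaussian flux error of standard deviation $\sigma$ translates into a magnitude error of roughly $1.0857\,\sigma$ mag.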

    n=10000;    %number of variates
    meanmag=18; %we set the mean magnitude <mag> to 18

    rms=[0.01 0.03 0.1 0.3];    %redo the simulations for several values of sigma
    m=numel(rms);

    for j=1:m

        figure(j)
        sigma=rms(j);

        OutFile=['MagError' num2str(sigma) '.pdf'];

        x=normrnd(0,sigma,1,n);
        mag=meanmag-2.5*log10(max((1+x),eps));  %clip (1+x) at eps (MATLAB's
                                                %machine epsilon, ~2.2e-16) so
                                                %that log10 never receives a zero
                                                %or negative argument when x<=-1

        histogram(mag);
        xlim([meanmag-5*sigma meanmag+10*sigma]);
        str=sprintf('meanmag:%3.0f   sigma:%4.2f',meanmag,sigma);
        legend(str);
        title('Histogram: mag$-2.5\log_{10}(1+x)$','Interpreter','LaTeX')

        print(OutFile,'-dpdf');

    end
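
As a rough sanity check (a sketch, not part of the original script), the snippet below reuses the same normrnd draw and eps clipping and compares the sample standard deviation of the simulated magnitude error against the linearized estimate 1.0857*sigma; the two should agree for small sigma and diverge once sigma is large enough that (1+x) gets close to zero.

    n=10000;
    for sigma=[0.01 0.03 0.1 0.3]
        x=normrnd(0,sigma,1,n);                 %Gaussian flux error, zero mean
        magerr=-2.5*log10(max(1+x,eps));        %magnitude error, clipped at eps
        fprintf('sigma=%4.2f   std(magerr)=%6.4f   1.0857*sigma=%6.4f\n',...
                sigma,std(magerr),1.0857*sigma);
    end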