A few of my RM measurements have sections that I have excluded by making linked selections before and after an inclusion is hit. However, when the downhole fractionation fits are calculated, the bad portion of the analysis is still displayed, suggesting it is included in the average (thick red line). Is this intentional, and if so, is there a way to exclude part of an RM analysis, or do I have to delete the entire selection?

I have also noticed that the "Quality of fit" statistics do not change when adjusting the start/end trim during manual fitting. It appears that the statistics include the start and end values, which are often outliers. Is there a way to exclude these, or is it easier to just go back and truncate the selection to exclude the last couple of slices?


    antony_burnham

    It looks like this is a bug when using linked selections on the RM. I'll get that fixed for the next release. If you're able to, can you share your session to the support email so I have something to test with?

    Regarding the quality of fit -- I think that depending on the data and your reasons for cropping, you may or may not want to see the plots/calculations including the cropped parts. I'll see about adding an option for that.

    All the best,

    • Joe
      17 days later

      Joe 😃 Hi Joe, I also have a question for you about downhole fractionation calibration. When I was working on zircon U-Pb data, the conditions I chose were as shown in https://imgur.com/s00BZrb, and the final calibration result is shown in https://imgur.com/ijhHxRr. I would like to ask a few questions about this. What does the thick red line in the middle represent? I always thought it was the average of the calibration fit. And what does the thick black line in the middle represent? Is the signal curve shown only for the standard 91500? Finally, how does my result look? I don't think the last histogram fits a normal distribution very well, because when I change the fit type, for example to exponential plus linear or running median, the fitting results are very different, as shown in https://imgur.com/hKL01yK. But I'm not very good at judging which one is more suitable for my samples. 😃


        chenhongjun

        The red line is the average of all of the reference material measurements and the black line is the result of your fit. In the first example, it appears to me that something has caused the automatic fit to fail -- you may want to adjust the fit type and/or the start+end crops. There is nothing necessarily wrong with the outcome of your running median test. Ultimately, I would trust whichever fit produces the best agreement for secondary reference materials.
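
        To illustrate the distinction, here is a minimal sketch with made-up numbers (synthetic data, not iolite's actual implementation):

        ```python
        # The "red line" is the mean ratio at each beam-seconds step across all
        # RM selections; the "black line" is a model fitted to that average.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        beam_seconds = np.linspace(0, 30, 60)                   # time since laser-on
        true_trend = 1.0 + 0.3 * np.exp(-0.15 * beam_seconds)   # assumed downhole shape

        # Simulate several RM selections with noise (stand-ins for real analyses)
        selections = [true_trend + rng.normal(0, 0.02, true_trend.size)
                      for _ in range(8)]

        red_line = np.mean(selections, axis=0)                  # average of all RM measurements

        def exp_model(t, a, b, c):                              # exponential downhole model
            return a + b * np.exp(-c * t)

        popt, _ = curve_fit(exp_model, beam_seconds, red_line, p0=(1.0, 0.3, 0.1))
        black_line = exp_model(beam_seconds, *popt)             # the fitted curve
        print("fitted a, b, c:", popt)                          # ~ (1.0, 0.3, 0.15)
        ```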

        All the best,

        • Joe

          Joe

          In the first example, it appears to me that something has caused the automatic fit to fail -- you may want to adjust the fit type and/or the start+end crops.
          Yes, as you said, if the thick black line is the result of the fit, then my automatic fit does appear to have failed. I kept trying and found that no matter which fit type I used, it didn't seem to fit very well, so I changed the beam seconds method from gaps between samples to cutoff threshold. I set the beam seconds sensitivity to 20000 (although I am still not very clear on what this value should be based on; I think it is based on the 238U count rate, but I don't know if that is right), as shown in https://imgur.com/kfRoavB. The fitting method is exponential, and I found that the black line now fits better, as shown in https://imgur.com/Ob1uDRW. Maybe I did it right? But what worries me more is why gaps between samples couldn't be fitted before, and how we should choose between the different beam seconds methods, although I know this parameter tells the software when the laser starts.
          Ultimately, I would trust whichever fit produces the best agreement for secondary reference materials.
          But I found that the running median works better among the fit types I tried, because the black line and the red line agree better: https://imgur.com/PdXIzhP. However, following your guidance to trust whichever fit produces the best agreement for secondary reference materials: running median gives the best-looking fit, but the result for the secondary standard Plesovice is not as good as with the exponential fit, and it is more scattered, as shown in https://imgur.com/OutiwiH


            chenhongjun

            When the beam seconds method is cutoff threshold, the value refers to the index channel. In your case the index channel is U238, so it will reset the beam seconds to 0 when U238 crosses 20,000. If you share your data with the support email, I can investigate why gaps between samples isn't working for your data.
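
            As a rough sketch of the idea (not the actual iolite code; the helper name and test values here are made up), the cutoff-threshold method amounts to restarting a clock each time the index channel rises through the threshold:

            ```python
            # Beam seconds restart at 0 whenever the index channel (here U238)
            # rises through the threshold, so t=0 tracks laser-on rather than
            # the start of the selection.
            import numpy as np

            def beam_seconds_from_cutoff(time_s, index_cps, threshold=20_000):
                bs = np.full(len(time_s), np.nan)
                t0 = None
                prev_below = True
                for i, (t, c) in enumerate(zip(time_s, index_cps)):
                    if prev_below and c >= threshold:   # rising edge: laser on
                        t0 = t
                    prev_below = c < threshold
                    if t0 is not None and c >= threshold:
                        bs[i] = t - t0                  # seconds since laser-on
                return bs

            t = np.arange(0.0, 10.0, 0.5)
            u238 = np.where((t > 2) & (t < 8), 50_000, 100)  # background / ablation / washout
            print(beam_seconds_from_cutoff(t, u238))         # nan ... 0.0, 0.5, ... nan
            ```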

            I can't say for sure without looking at your data, but my guess is that the downhole fit is mostly responsible for the ellipse size and the spline fit is mostly responsible for the scatter. So you may want to investigate how changing the spline type for your primary RM changes the results.
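
            For example (a sketch with synthetic data; the smoothing values are arbitrary), a very flexible spline through the primary RM ratios chases the noise in each analysis and passes that scatter on to the samples, while a heavily smoothed spline follows only the real drift:

            ```python
            import numpy as np
            from scipy.interpolate import UnivariateSpline

            rng = np.random.default_rng(1)
            session_time = np.linspace(0, 120, 25)            # minutes; primary RM analyses
            drift = 1.00 + 0.0005 * session_time              # slow, real session drift
            ratios = drift + rng.normal(0, 0.01, session_time.size)

            wiggly = UnivariateSpline(session_time, ratios, s=0)   # interpolating: chases noise
            smooth = UnivariateSpline(session_time, ratios,
                                      s=len(ratios) * 0.01**2)     # smoothed to the noise level

            print("interpolating spline max error vs true drift:",
                  np.abs(wiggly(session_time) - drift).max())
            print("smoothed spline max error vs true drift:",
                  np.abs(smooth(session_time) - drift).max())
            ```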

            All the best,

            • Joe

              Joe I've sent my data to the support mailbox. What I don't understand is that, while the exponential fit looks bad, the secondary reference materials give similar results and agreement to the other fit types 😅


                chenhongjun

                Thank you for sharing your data. I've had a look at it and here are my thoughts:

                1. Beam seconds from sample gaps starts the beam seconds at 0 when the "sample" starts. Since you have background when the sample starts, t=0 does not correspond to laser-on, and the curvature due to downhole fractionation therefore appears much later. This cannot be fit well by the exp(-c*t) terms in the exponential fits (see the sketch after this list).
                2. One of your 91500 selections is not actually 91500 (NZ010-35).
                3. Beam seconds from cutoff threshold works fine here.
                4. There is not a significant difference between using exponential (when properly fit using cutoff threshold for beam seconds), smoothing spline, or running median. All result in a mean age ~15 Ma too young for Plesovice, with significant scatter.
                5. You can improve the scatter by changing the spline type for 91500 to something less variable (e.g. a highly smoothed spline, mean, or linear), but the concordia age is still ~15 Ma too young.
                6. My guess for why it is too young is a slightly-off detector cross-calibration. 91500 U is 150 kcps, Plesovice U is 2.2 Mcps; 2.2 Mcps is likely above the pulse-counting to analogue transition, but I don't know your instrument.
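
                On point 1, here is a sketch of the problem with made-up numbers (the 20 s of background is an assumption for illustration): when t=0 is the selection start rather than laser-on, the data are flat and then curve late, and a + b*exp(-c*t) cannot describe that shape.

                ```python
                import numpy as np
                from scipy.optimize import curve_fit

                def exp_model(t, a, b, c):
                    return a + b * np.exp(-c * t)

                t = np.linspace(0, 50, 200)
                laser_on = 20.0                              # assumed background length (s)
                signal = t >= laser_on
                ratio = np.where(signal,
                                 1.0 + 0.3 * np.exp(-0.15 * (t - laser_on)), 0.0)

                # t=0 at laser-on (what cutoff threshold achieves): recovers the model
                good, _ = curve_fit(exp_model, t[signal] - laser_on, ratio[signal],
                                    p0=(1.0, 0.3, 0.1))
                # t=0 at the selection start, background included: at best a
                # compromise curve with the wrong shape (b comes out negative)
                bad, _ = curve_fit(exp_model, t, ratio, p0=(1.0, -1.0, 0.2))

                print("aligned fit:   ", good)               # ~ (1.0, 0.3, 0.15)
                print("misaligned fit:", bad)
                ```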

                All the best,

                • Joe

                  Joe 😃 First of all, thank you very much for your prompt response and for answering questions that have troubled me for a long time. It turns out that the way we set up the experiment means we cannot use the gaps method to fit downhole fractionation.
                  2. One of your 91500 selections is not actually 91500 (NZ010-35).
                  Yes, I also found this problem. When I selected the integration interval, I noticed many U and Pb spikes in 91500, which I thought would affect my data processing results. To avoid selecting these spikes, I did not select the standard range by time, but by the average signal range of U or Pb in 91500; this function is in Tools → Automatic Selections, as shown in https://imgur.com/4XjW9Km and https://imgur.com/0PQNPo6.
                  However, selecting the integration interval this way causes new problems. For example, as you mentioned, the U content in some samples falls close to the standard's range, so they are mistakenly treated as 91500.
                  So I don't know how to avoid being affected by the spikes while also not mistakenly selecting samples as 91500, given that there are spikes in the 91500.


                    chenhongjun

                    Where a spike has caused two selections to be made, you can delete one of them and manually extend the other to cover the full range. A single spike like that will almost certainly be removed by the outlier rejection.
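
                    As a sketch of why the spike goes away (a generic sigma-clipping scheme for illustration, not necessarily the exact rule iolite applies):

                    ```python
                    import numpy as np

                    def sigma_clip(values, n_sigma=2.5, iters=3):
                        """Iteratively mask points far from the mean of the kept data."""
                        vals = np.asarray(values, dtype=float)
                        keep = np.ones(vals.size, dtype=bool)
                        for _ in range(iters):
                            m, s = vals[keep].mean(), vals[keep].std()
                            keep = np.abs(vals - m) <= n_sigma * s
                        return keep

                    data = 1.0 + np.random.default_rng(2).normal(0, 0.02, 50)
                    data[25] = 5.0                     # a single-slice spike
                    print(sigma_clip(data)[25])        # False: the spike is rejected
                    ```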

                    In the future, take care to make sure the sample labels are correct; then you can set up selections automatically from the Data/Samples tab with appropriate start/end times (or durations) to avoid the background.

                      5 days later

                      Joe Thanks a lot for your answer! 😃 Do we have to delete one selection when a spike causes two selections in 91500? If there are two selections, what effect will that have on the data fit? I always thought the software wouldn't automatically reject spikes through outlier rejection.
                      You mentioned: "take care to make sure the sample labels are correct; then you can set up selections automatically from the Data/Samples tab with appropriate start/end times (or durations) to avoid the background."
                      What do you mean by sample labels here? In my automatic selections I can't tell the difference between the sample and the standard, only the file name (https://imgur.com/n2vN5Lp). Where am I setting it wrong? I'm very sorry to bring you so many questions 😀


                        chenhongjun

                        Having two selections like that won't hurt the downhole fit -- they will both contribute to their portion of the downhole average. I'd be more worried about having two selections so close to each other and trying to fit a spline through them: selections that are very close together can cause wild splines. Outliers of that nature will be rejected by default; a larger inclusion might not be.
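
                        To see the "wild spline" effect with made-up numbers (a sketch, not your data): two points almost on top of each other with slightly different values force an interpolating cubic spline into large swings between the other points.

                        ```python
                        import numpy as np
                        from scipy.interpolate import CubicSpline

                        x = np.array([0.0, 10.0, 20.0, 20.2, 30.0, 40.0])   # two selections 0.2 apart
                        y = np.array([1.00, 1.01, 0.99, 1.03, 1.00, 1.01])  # small real scatter

                        spline = CubicSpline(x, y)
                        dense = np.linspace(0, 40, 400)
                        print("data range:  ", y.min(), "-", y.max())
                        print("spline range:", spline(dense).min().round(3), "-",
                              spline(dense).max().round(3))
                        # The spline's excursions far exceed the 0.99-1.03 data range.
                        ```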

                        I think I was referring to some of the 91500 and Plesovice selections being mixed up. I was recommending that you use the data or log file metadata to create selections automatically, rather than the "Automatic selections from channels" tool, to avoid situations where it is tricky to find criteria that match only the sample you're targeting. See the webinar here, around 58 minutes in, where Bence talks about creating selections in the Data/Samples tab.

                        All the best,

                        • Joe
                          13 days later

                          Joe I have been busy revising a paper, so I am very sorry for taking so long to reply to you. Thank you for your patience! 😃