Hi Bence, thanks for the help again.
My DRS is based on the U-Pb DRS and includes baseline subtraction, calculation of an isotopic ratio, a downhole fractionation correction, and calculation of the final result from this ratio.
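As background, the baseline-subtraction and ratio steps can be sketched with plain numpy. This is only an illustration with made-up intensities and hypothetical isotope names, not the actual DRS code (there, the baselines come from iolite's baseline splines and the intensities from the input channels):

```python
import numpy as np

# Hypothetical raw intensities (cps) and baseline (gas blank) levels for two isotopes
raw_206 = np.array([10500.0, 10800.0, 11000.0])
raw_238 = np.array([250000.0, 252000.0, 251000.0])
bl_206 = 500.0   # baseline level for mass 206
bl_238 = 1000.0  # baseline level for mass 238

# Baseline-subtract each channel, then form the isotopic ratio
net_206 = raw_206 - bl_206
net_238 = raw_238 - bl_238
ratio_206_238 = net_206 / net_238
print(ratio_206_238)
```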
Bence added the grouping of reference material blocks, which exploits the fact that the time between standard analyses of different blocks is longer than the time between analyses within one block. I adapted the code to be more flexible.
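To illustrate the grouping idea outside of iolite, here is a minimal, self-contained sketch of the gap-based labelling with made-up midtimes (no iolite API involved):

```python
import numpy as np

# Hypothetical selection midtimes (seconds): three blocks separated by long gaps
mid_times = np.array([0, 30, 60, 600, 630, 660, 1200, 1230])

# Gap to the previous selection; the first selection has no predecessor
gaps = np.insert(np.diff(mid_times), 0, 0)

# A gap larger than the mean gap starts a new block
cutoff = np.mean(gaps)
labels = []
current = 1
for g in gaps:
    if g > cutoff:
        current += 1
    labels.append(current)

print(labels)  # → [1, 1, 1, 2, 2, 2, 3, 3]
```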
First of all, the code requires the import of several libraries:

```python
import numpy as np
import itertools
from datetime import datetime
```
Then this is the part of the code that handles the grouping and the assignment of selection midtimes:
```python
rms = [settings['ReferenceMaterial']]  # Can also just be the name of your reference material
groups = [data.selectionGroup(rm) for rm in rms if rm]
selections = list(itertools.chain(*[sg.selections() for sg in groups]))

if len(selections) == 0:
    print('No selections to find blocks for!')
    raise RuntimeError('No selections to find blocks for!')

# Create a list of the UUIDs of component (linked) selections
component_sels = [cs.property('UUID') for s in selections if s.hasLinks() for cs in s.linkedSelections()]

# Now kick them out
selections = list(filter(lambda s: s.property('UUID') not in component_sels, selections))

def sel_sorter(sel):
    if not sel.isLinked():
        return sel.midTimestamp
    else:
        return sel.linkedMidTimestamp()

selections.sort(key=sel_sorter)
selMidTimes = [s.midTimestamp if not s.isLinked() else s.linkedMidTimestamp() for s in selections]

# Pair each selection index with the time gap to the previous selection
diffs = np.column_stack((range(len(selMidTimes)), np.insert(np.diff(selMidTimes), 0, 0)))

# A gap larger than the mean gap marks the start of a new block
cutoff = np.mean(diffs[:, 1])
current_label = 1
labels = []
for i, v in enumerate(diffs[:, 1]):
    if v > cutoff:
        current_label += 1
    labels.append(current_label)

block_selections = {}
for k in np.unique(labels):
    matches = np.where(np.array(labels).astype(int) == k)[0]
    block_selections[k] = [selections[i] for i in matches]

# Now we have the blocks, we can calculate the block means and midtimes
block_means = []
block_midtimes = []

for b in block_selections.keys():
    print(f'Block {b}')
    sels = block_selections[b]
    for s in sels:
        print(f'\t{s.name}')

    block_midtime = np.mean([s.midTimestamp if not s.isLinked() else s.linkedMidTimestamp() for s in sels])
    print(f'Block midtime: {block_midtime}: {datetime.fromtimestamp(block_midtime).strftime("%Y-%m-%d %H:%M:%S")}')
    block_midtimes.append(block_midtime)

    # Change this to the channel you want to average
    channel = 'Name of your timeseries'
    ts = data.timeSeries(channel)
    block_vals = [data.result(s, ts).value() for s in sels]
    block_mean = np.mean(block_vals, axis=0)
    print(f'Block mean: {block_mean}')
    block_means.append(block_mean)
```
Now that the blocks are created, you can verify that this part of the code worked properly: open the Python command window (SHIFT + CTRL + P) to see which selections have been grouped into blocks.
The next step is to use `block_means` and `block_midtimes` to interpolate over the index time, to obtain either a timeseries or its spline for further calculations such as standard bracketing:
```python
interp_means = np.interp(indexChannel.time(), block_midtimes, block_means, left=block_means[0], right=block_means[-1])
data.createTimeSeries('BlockMeans', data.Intermediate, indexChannel.time(), interp_means, commonProps)
```
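If `np.interp`'s behaviour is unfamiliar: it interpolates linearly between the block midtimes, and the `left`/`right` arguments hold the first and last block mean constant outside the covered time range. A standalone demo with made-up numbers:

```python
import numpy as np

block_midtimes = [100.0, 200.0, 300.0]
block_means = [2.0, 4.0, 3.0]
t = np.array([50.0, 150.0, 250.0, 350.0])

# t=50 is before the first midtime (clamped left), t=350 after the last (clamped right)
interp = np.interp(t, block_midtimes, block_means, left=block_means[0], right=block_means[-1])
print(interp.tolist())  # → [2.0, 3.0, 3.5, 3.0]
```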
This will have created a time series, so you can inspect the data in the Time Series view for further confirmation.
If you prefer to use the spline of this timeseries, you can calculate it like this:

```python
sp = data.spline('Name of your reference material', 'BlockMeans')
sp.data()  # You can access the y data of the spline with this line
```
And that is all I added to get the desired result. After these steps I follow the examples given in the U-Pb DRS to standard-bracket my unknown analyses and calculate the final result.
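For completeness, the standard-bracketing idea itself can be sketched like this. This is not the U-Pb DRS code; the accepted value and all arrays are hypothetical, and in the real DRS the interpolated RM values come from the spline above:

```python
import numpy as np

accepted_ratio = 0.05                              # hypothetical accepted RM value
interp_rm_means = np.array([0.048, 0.050, 0.052])  # interpolated RM ratio at sample times
measured_sample = np.array([0.060, 0.061, 0.059])  # measured ratio of the unknowns

# Correction factor from the bracketing standards, applied to the unknowns
correction = accepted_ratio / interp_rm_means
corrected_sample = measured_sample * correction
print(corrected_sample)
```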
Hope this helps anyone who has a similar question!