All,
Do you want to read in raw Element *.dat files "on the fly" rather than waiting for the end of a sequence to generate *.FIN2 and FIN files? I have a solution for you. The code below extracts all the information stored in the FIN2 files from the Element's raw data files. As pointed out by Hartmann et al. (2017) and Pullen et al. (2018), there is a small disparity between the values in the FIN2 and ExtractDat processed files at very low and very high count rates. In cases where the pulse counter is tripped, intensities are calculated as Analog*ACF. The ACF here is the value determined by the machine and saved in the scan header of the *.dat file. Post-processing the files for time-dependent and mass-dependent ACF values (see the Hartmann and Pullen papers) is beyond the scope of this parser.
This code reads and imports Thermo Element 2/XR binary data files into iolite. It works on single files and on entire folders of data. Much of the binary data parsing algorithm is described in a paper by John Hartmann and others (2017). The .dat reader code here is my distillation of Hartmann's work. In addition, this parser extracts relevant information from the setup information (*.inf) file: runs, passes, deadtime, and the isotope identifier strings. The *.dat file does not store these parameters, which are critical to importing data into iolite.
If you choose to import all the *.dat files in a folder, **the directory should not contain both *.FIN2 and *.dat files unless you disable the bundled Element *.FIN2 importer**.
AUTOMATIC BASELINE / SAMPLE SELECTIONS
This code can be configured to automatically assign baseline and sample selections. I don't automatically assign reference materials because this can be done with a couple of clicks in the selections screen. To activate this functionality, you will need to change autoBaseSel, blOffset, blDuration, autoSmpSel, opOffset, and opDuration in the code. Furthermore, you may want to change SAMPLE_NUM_SEPARATOR in the code to be consistent with the sample numbering format in your lab. This regular expression allows recognition of both the "group name" (i.e., the name of the sample) and the sample number; see the short example below.
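As a sketch of how the separator is used (the file name below is a made-up example; the splitting mirrors the logic in import_data in the code):

```python
import re

SAMPLE_NUM_SEPARATOR = "[-_]"  # dash or underscore

basename = "TEMORA2-015.dat"   # hypothetical file name
groupRegex = f"{SAMPLE_NUM_SEPARATOR}[0-9a-zA-Z]+.dat"
groupName = re.split(groupRegex, basename)[0]   # -> "TEMORA2"
sampleName = re.split(".dat", basename)[0]      # -> "TEMORA2-015"
```

All analyses of a sample then land in one selection group named for the sample, while each file keeps its full numbered sample name.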
OTHER FEATURES
PULSE_THRESHOLD: This allows the end user to choose a lower pulse-counting threshold for a "tripped" value (I believe the pulse-count threshold on the Element is 5E+6). If PULSE_THRESHOLD is set to 4E+6, pulse values above 4 Mcps will be replaced by Analog*ACF.
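In NumPy terms, the substitution amounts to the following (a minimal sketch with hypothetical values, mirroring processTimeSeries in the code below):

```python
import numpy as np

PULSE_THRESHOLD = 4E+6  # lowered from the nominal 5E+6 trip point

pulse = np.array([2.0E+6, 4.5E+6, np.nan])  # cps; NaN marks a tripped reading
analog = np.array([220.0, 500.0, 610.0])    # raw analog values
ACF = 9000.0                                # analog conversion factor from the scan header

useAnalog = np.logical_or(pulse > PULSE_THRESHOLD, np.isnan(pulse))
reported = np.where(useAnalog, analog * ACF, pulse)  # [2.0e6, 4.5e6, 5.49e6]
```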
IGNORE_FARADAY: For Element 2s this should always be "True". I strongly recommend keeping it "True" in any case. Cross-calibration between Faraday and counting data using the FCF (Faraday-to-count factor) is really complicated. Furthermore, unlike the pulse and analog detectors, where signals are measured simultaneously, the conversion dynode has to be turned off in order to generate Faraday data; this means that there is at least one channel of missing data. This is impractical for LA-ICP-MS analysis in my opinion. If your analysis will contain Faraday-sized signals, I would recommend using Faraday mode in your method rather than Triple mode.
INSTALLATION
Copy this code into a new *.py file in the Python Workspace in iolite, then save it to the /Plug-Ins/Importers folder of your iolite 4 installation.
BETA-TESTING
I have alpha tested this importer internally with data from our lab and with data sent to me from other labs. The importer has not seen extensive beta testing. That's where you come in.
PERFORMANCE
I have found that this importer has performance comparable to the *.FIN2 text importer, though I haven't done any formal benchmarking. On-the-fly data reduction, not importer performance, is the principal objective of this code. I have experienced iolite hanging during both the FIN2 and DAT import process when I try to do other things on my computer, so my recommendation is to start the importer and be patient. Save your session immediately upon successful import of the files.
DISCLAIMER
Because this importer accesses raw Element data before "evaluation" (post-processing) by the instrument manufacturer, it is unlikely that ThermoScientific would sanction this approach. So beware. While this importer may function and give the same time-series data as the *.FIN2 files, the developers of iolite have, understandably, decided that it will not become part of the bundled importer suite. Read the GNU v.3 license terms and understand that this code is intended for public use without any warranty or liability on the part of the original developer.
```python
# A python-based importer for iolite 4 starts with some metadata
#/ Type: Importer
#/ Name: Iolite Element *.dat importer
#/ Authors: Jeremy Hourigan
#/ Description: Imports Thermo Element *.dat files
#/ References: None
#/ Version: 1.0
#/ Revision Date: 05/08/2022
'''
Python importer for reading Thermo Element *.dat files into Iolite 4
Copyright (C) 2022 Jeremy Hourigan
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
'''
Save this code as IoliteDAT.py and place it in the /Plug-Ins/Importers folder of your iolite installation.
For a description of the parsing routine, see: <https://github.com/jhh67/extractdat>
The *.inf parsing algorithm was developed by Jeremy Hourigan and Rob Franks at UC Santa Cruz.
'''
import numpy as np
import pandas as pd
import re
import os
import datetime
from collections import defaultdict
import struct
import time
from iolite.QtCore import QDateTime
""" AUTOMATIC SELECTION PARAMETERS """
'''
For automatic assignment of baseline and on-peak selections at read time
'''
# AUTOMATIC BASELINE SELECTION
# Set to True to activate automatic baseline selection
autoBaseSel = False
#Offset of baseline start from start of file
blOffset = 0
#Duration of baseline selection
blDuration = 15
# AUTOMATIC "SAMPLE" SELECTION
# Set to True to activate automatic sample selection
autoSmpSel = False
#Offset of sample start from start of file
opOffset = 15.8
#Duration of sample selection
opDuration = 15
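#Delay (s) from start of file to laser-on; used when generating BeamSeconds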
laserOnDelay = 15
""" BEGIN INSTRUMENT CONSTANTS """
#This is specific to UC Santa Cruz's naming strategy. Modify with Regex for your lab
SAMPLE_NUM_SEPARATOR = "[-_]" # Dash or underscore
MASS_NUMERIC_PRECISION = 0.5
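# e.g. round(237.94 / 0.5) * 0.5 -> 238.0, so channel masses snap to the nearest half-integer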
# Alter PULSE_THRESHOLD to reduce the pulse count rate that is considered "tripped"; above that rate, analog is used.
PULSE_THRESHOLD = 5E+6
# Only applies to Element XR; Faraday cross-calibration is tricky. Not recommended for LA-ICP-MS.
IGNORE_FARADAY = True
""" BEGIN INSTRUMENT CONSTANTS """
MAG_DAC_BITS = 18 # ELEMENT XR @ UCSC
"""BEGIN CONSTANTS FOR INF PARSING"""
LEN_INF_FILE_ID = 64
# INF File Bit Masks
MASK_TYP = 0x00000000000000FF
MASK_PTR = 0x00000000FFFFFF00
MASK_TOK = 0x000000FF00000000
MASK_FLG = 0x00000F0000000000
MASK_LEN = 0xFFFFF00000000000
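# Layout of each 64-bit registry entry, as implied by the masks above and the
# shifts applied in getInfHeader below:
#   bits  0-7   type
#   bits  8-31  pointer (byte offset of the record's data)
#   bits 32-39  token (record identifier, e.g. KEY_DEADTIME)
#   bits 40-43  flag
#   bits 44-63  length of the record's data in bytes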
# INF File Data Keys
# Record identifiers for important stored data
KEY_TIMESTAMP = 0x84
KEY_DEADTIME = 0xB8
KEY_RUNS = 0x99
KEY_MASSES = 0x98
KEY_MASS_ID = 0xC2
# INF File Data Offsets
OFFSET_DATETIME = 0x84
OFFSET_FIELDS = 0x108
OFFSET_REGISTRY = 0x114
""" BEGIN CONSTANTS FOR DAT PARSING"""
LEN_DAT_FILE_ID = 16
# DAT FILE MASKS
DAT_TYPE_MASK = 0xF0000000 # TYPE NIBBLE IN SCAN ITEM
DAT_DATA_MASK = 0x0FFFFFFF # DATA BITS IN SCAN ITEM
DATA_FLAG_MASK = 0x0F000000
DETECT_TYP_MASK = 0x00F00000
DATA_EXP_MASK = 0x000F0000
DATA_BASE_MASK = 0x0000FFFF
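# Decoding of an intensity scan item, as implied by the masks above and their
# use in getMassScanValues below:
#   bits 28-31  item type nibble (e.g. KEY_INTENSITY)
#   bits 24-27  flag nibble; nonzero flags an invalid reading (stored as NaN)
#   bits 20-23  detector type (pulse / analog / faraday)
#   bits 16-19  exponent
#   bits  0-15  base value
# so a raw intensity unpacks as base * 2**exponent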
# DAT FILE OFFSETS
OFFSET_HEADER_START = 0x94
OFFSET_HEADER_LENGTH = 0xAC
OFFSET_TIMESTAMP = 0xB0
# DAT FILE KEYS
KEY_DWELL = 0x30000000 # MASS DWELL TIME
KEY_MAG = 0x20000000 # MAGNET MASS
KEY_MAGF = 0x40000000 # ACTUAL MASS
KEY_INTENSITY = 0x10000000 # INTENSITY
KEY_END_OF_MASS = 0x80000000 # END OF MASS
KEY_END_OF_SCAN = 0xF0000000 # END OF SCAN
KEY_PULSE = 0x00100000
KEY_ANALOG = 0x00000000
KEY_FARADAY = 0x00800000
EXP_SHIFT = 16
""" BEGIN IOLITE IMPORT REQUIRED FUNCTIONS """
def accepted_files():
    return ".dat"
def correct_format():
    """
    Checks for the correct starting text in both *.dat and *.inf files. Returns True if both are correct.
    """
    if not importer.fileName.endswith('dat'):
        IoLog.debug(f"{importer.fileName} does not end with dat")
        return False
    IoLog.debug(f"Importer called on {importer.fileName}")
    with open(importer.fileName, mode='rb') as dat:
        datDesc = dat.read(LEN_DAT_FILE_ID)
    dat_regex = "CHdrFile"
    dat_regex = dat_regex.encode("utf-16-le")
    if re.search(dat_regex, datDesc, re.MULTILINE):
        # IoLog.debug(f"Correct DAT Format")
        infPath = importer.fileName.replace(".dat", ".inf")
        # IoLog.debug(f"Importer checking for {infPath}")
        if os.path.exists(infPath):
            with open(infPath, mode='rb') as inf:
                infDesc = inf.read(LEN_INF_FILE_ID)
            inf_regex = "This is a FINNIGAN/MAT INFO file"
            inf_regex = inf_regex.encode("utf-16-le")
            if re.search(inf_regex, infDesc, re.MULTILINE):
                # IoLog.debug(f"Correct INF Format")
                return True
            else:
                IoLog.debug(f"{infPath} did not match the Element INF format.")
                importer.message("Incorrect INF file format")
                return False
        else:
            IoLog.debug(f"Setup file {infPath} not found")
            importer.message("INF setup file not found")
            return False
    else:
        IoLog.debug(f"The file {importer.fileName} did not match the Element DAT format.")
        importer.message("Incorrect DAT file format")
        return False
def import_data():
    """ EXTRACTION OF SAMPLE AND GROUP NAMES """
    basename = os.path.basename(importer.fileName)
    datPath = importer.fileName
    # IoLog.debug(f" DAT File Name: {datPath}")
    infPath = importer.fileName.replace(".dat", ".inf")
    # IoLog.debug(f" INF File Name: {infPath}")
    sampleRegex = ".dat"
    groupRegex = f"{SAMPLE_NUM_SEPARATOR}[0-9a-zA-Z]+.dat"
    groupName = f"{re.split(groupRegex, basename)[0]}"
    sampleName = f"{re.split(sampleRegex, basename)[0]}"
    """ DECLARE NEW INSTANCE OF SAMPLE RECORD CLASS """
    smp = sampleData()
    smp.updateNames(sampleName, groupName)
    smp.updatePaths(datPath, infPath)
    smp.updateTimes()
    smp.updateMetadata()
    smp.updateScans()
    smp.createMassRecords()
    smp.getMassScanValues()
    smp.calculateMassIntensities()
    """ LOAD PARSED SAMPLE RECORD DATA INTO IOLITE """
    uploadToIolite(smp, laserOnDelay)
    autoCreateSelections(smp, autoBaseSel, autoSmpSel)  # True, True creates baseline and sample selections
    """ IMPORTER HOUSEKEEPING """
    importer.message("Finished")
    importer.progress(100)
    importer.finished()
""" BEGIN SAMPLE RECORD CLASS """
class sampleData:
def __init__(self):
self.name = ""
self.group = ""
self.isotopes = []
self.filePaths = {}
self.fileTimes = {}
self.metaData = {
"runs": 0,
"passes": 0,
"deadTime": 0,
"masses": 0,
"cycles": 0
}
self.masses = {}
self.scans = {
"ACF": [],
"FCF": [],
"EDAC": [],
"scanTime": []
}
def updateNames(self, sampleName, groupName):
self.name = sampleName
self.group = groupName
def updatePaths(self, datPath, infPath, debug=False):
self.filePaths["DAT"] = datPath
self.filePaths["INF"] = infPath
Paths = getDatFilePaths(datPath)
self.filePaths["MET"] = Paths["MET"]
self.filePaths["TPF"] = Paths["TPF"]
if debug:
IoLog.debug("Paths updated in sample record")
def updateTimes(self, debug=False):
self.fileTimes["DAT"] = getDatTimestamp(self.filePaths["DAT"])
self.fileTimes["INF"] = getInfTimestamp(self.filePaths["INF"])
if debug:
IoLog.debug("File times updated in sample record")
def updateMetadata(self, debug=False):
infPath = self.filePaths["INF"]
infHdr = getInfHeader(infPath)
tmp = getInfRunsPasses(infPath, infHdr)
self.metaData["runs"] = tmp["runs"]
self.metaData["passes"] = tmp["passes"]
self.metaData["deadTime"] = getInfDeadTime(infPath, infHdr)
self.metaData["masses"] = getInfMasses(infPath, infHdr)
self.metaData["cycles"] = tmp["runs"] * tmp["passes"]
if debug:
IoLog.debug("Metadata updated in sample record")
    def createMassRecords(self, debug=False):
        infPath = self.filePaths["INF"]
        infHdr = getInfHeader(infPath)
        infIsotopes = getInfIsotopes(infPath, infHdr)
        self.isotopes = infIsotopes
        for isotope in infIsotopes:
            self.masses[isotope] = massRecord()
        if debug:
            IoLog.debug(f"{len(infIsotopes)} masses uploaded into sample record")
    def updateScans(self, debug=False):
        datPath = self.filePaths["DAT"]
        datHdr = getDatHdr(datPath)
        scans = getDatScans(datPath, datHdr)
        IoLog.debug(f"Scan header data:{scans}")
        self.scans["ACF"] = scans[:, 12] / 64
        # print(self.scans["ACF"])
        self.scans["FCF"] = scans[:, 34] >> 8
        self.scans["EDAC"] = scans[:, 31]
        self.scans["scanTime"] = (scans[:, 19] - scans[0, 18]) / 1000
        if debug:
            IoLog.debug("Scan header data uploaded to sample record")
    def getMassScanValues(self):  # NEED TO FIGURE OUT ORDER OF RECORD TO DETERMINE WHEN TO ITER CHANNEL
        datPath = self.filePaths["DAT"]
        datHdr = getDatHdr(datPath)
        scans = getDatScans(datPath, datHdr)
        # SELECT FIRST ROW TO MASK KEYS FOR POPULATING RAW DATA
        # dwell, magnet mass, actual mass, and truncated mass are pulled from the 1st scan (i.e. not a time series)
        # for intensities the column at the current index is selected (i.e. all scans)
        parseRow = scans[0, :]
        mass = 0
        idx = 46
        channels = 0
        dwell = 0
        ID = self.isotopes[mass]
        for x in parseRow[46:]:
            key = x & DAT_TYPE_MASK  # DAT_TYPE_MASK = 0xF0000000
            dataBits = x & DAT_DATA_MASK  # DAT_DATA_MASK = 0x0FFFFFFF
            if key == KEY_DWELL:
                dwell = dataBits
                idx += 1
            elif key == KEY_MAG:
                ID = self.isotopes[mass]
                self.masses[ID].magMass.append((dataBits * 1.0) / 2.0 ** MAG_DAC_BITS)
                idx += 1
            elif key == KEY_MAGF:
                magMass = self.masses[ID].magMass[-1]
                edac = self.scans['EDAC'][0]
                actMass = 1 / float(dataBits) * magMass * edac * 1000
                self.masses[ID].actMass.append(actMass)
                # print(f"{ID} actMass @{idx} = {self.masses[ID].actMass}")
                idx += 1
                channels += 1
            elif key == KEY_INTENSITY:
                detType = dataBits & DETECT_TYP_MASK
                allScans = scans[:, idx]
                iExp = (allScans & DATA_EXP_MASK) >> EXP_SHIFT
                iBase = allScans & DATA_BASE_MASK
                iFlag = (allScans & DATA_FLAG_MASK) > 0
                values = np.where(iFlag, iBase * float("nan"), iBase * 2 ** iExp)
                if detType == KEY_PULSE:
                    detector = "pulse"
                elif detType == KEY_ANALOG:
                    detector = "analog"
                elif detType == KEY_FARADAY:
                    detector = "faraday"
                else:
                    detector = ""  # THROW EXCEPTION
                # np.append(self.masses[ID].pulse, values, axis=0)
                dataToAppend = np.expand_dims(values, 1)
                if channels == 1:
                    self.masses[ID].raw[detector] = dataToAppend
                else:
                    self.masses[ID].raw[detector] = np.append(self.masses[ID].raw[detector], dataToAppend, axis=1)
                idx += 1
            elif key == KEY_END_OF_MASS:
                self.masses[ID].aveMass = np.mean(self.masses[ID].actMass)
                self.masses[ID].truncMass = round(self.masses[ID].aveMass / MASS_NUMERIC_PRECISION) * MASS_NUMERIC_PRECISION
                self.masses[ID].channels = channels
                self.masses[ID].dwell = dwell * channels
                channels = 0
                mass += 1
                idx += 1
            elif key == KEY_END_OF_SCAN:
                # print(f"End of Scan @{idx}")
                name = self.name
                IoLog.debug(f"{name} DAT file read complete")
    def calculateMassIntensities(self):
        ACF = self.scans["ACF"]
        FCF = self.scans["FCF"]
        pMax = PULSE_THRESHOLD
        ignoreFaraday = IGNORE_FARADAY
        for isotope in self.masses:
            self.masses[isotope].processTimeSeries(pMax, ACF, FCF, ignoreFaraday)
""" BEGIN MASS RECORD CLASS"""
class massRecord:
def __init__(self):
self.magMass = []
self.actMass = []
self.aveMass = 0.0
self.truncMass = 0.0
self.channels = 0
self.dwell = 0
self.raw = {
"pulse": [[]],
"analog": [[]],
"faraday": [[]]
}
self.reported = None
self.massTimeSeries = None
def processTimeSeries(self, pMax, ACF, FCF=None, ignoreFaraday=True):
# Multiply raw analog ADC values by stored ACF to get equivalent counts
self.raw["analog"] = (self.raw["analog"].T * ACF).T
# Filter pulse count array NaNs or P-greater-than-threshold values are replaced with corresponding ACF-scaled analog value
useAnalog = np.logical_or(self.raw["pulse"] > pMax, np.isnan(self.raw["pulse"]))
self.reported = np.where(useAnalog, self.raw["analog"], self.raw["pulse"])
if not ignoreFaraday:
self.raw["faraday"] = (self.raw["faraday"].T * FCF).T
self.reported = np.where(np.isnan(self.reported), self.raw["faraday"], self.reported)
# Average across channels in a cycle ignoring NaNs to produce time-series
self.massTimeSeries = np.nanmean(self.reported, axis=1)
""" BEGIN LOW-LEVEL INF PARSING FUNCTIONS"""
def getInfTimestamp(infPath):
with open(infPath, mode='rb') as inf:
inf.seek(OFFSET_DATETIME)
infSecs = struct.unpack('<1l', inf.read(4))
infTime = datetime.datetime.fromtimestamp(infSecs[0])
inf.close()
return infTime
def getInfHeader(infPath):
    with open(infPath, mode='rb') as inf:
        inf.seek(OFFSET_FIELDS)
        fields = ord(inf.read(1))
        inf.seek(OFFSET_REGISTRY)
        tmp = inf.read(fields * 8)
        infVals = struct.unpack('<%dQ' % fields, tmp)
    infHdr = {}
    for x in infVals:
        key = (x & MASK_TOK) >> 32
        infHdr[key] = {}
        infHdr[key]["type"] = (x & MASK_TYP)
        infHdr[key]["pointer"] = (x & MASK_PTR) >> 8
        infHdr[key]["length"] = (x & MASK_LEN) >> 44
        infHdr[key]["flag"] = (x & MASK_FLG) >> 40
    return infHdr
def getInfDeadTime(infPath, infHdr):
    with open(infPath, mode='rb') as inf:
        tmp = infHdr.get(KEY_DEADTIME)
        inf.seek(tmp['pointer'])
        length = tmp['length']
        entries = length // 2
        deadTime = struct.unpack('<%dh' % entries, inf.read(length))[0]
    return deadTime
def getInfRunsPasses(infPath, infHdr):
    with open(infPath, mode='rb') as inf:
        tmp = infHdr.get(KEY_RUNS)
        inf.seek(tmp['pointer'])
        length = tmp['length']
        entries = length // 2
        rp = struct.unpack('<%dh' % entries, inf.read(length))
    evalPar = {"runs": rp[0], "passes": rp[1]}
    return evalPar
def getInfMasses(infPath, infHdr):
    with open(infPath, mode='rb') as inf:
        tmp = infHdr.get(KEY_MASSES)
        inf.seek(tmp['pointer'])
        length = tmp['length']
        entries = length // 2
        masses = struct.unpack('<%dh' % entries, inf.read(length))[0]
    return masses
def getInfIsotopes(infPath, infHdr):
    with open(infPath, mode='rb') as inf:
        tmp = infHdr.get(KEY_MASS_ID)
        inf.seek(tmp['pointer'])
        length = tmp['length']
        entries = length // 8
        massIDs = struct.unpack('<%dQ' % entries, inf.read(length))
        isotopes = []
        for x in massIDs:
            inf.seek((x & MASK_PTR) >> 8)
            length = int((x & MASK_LEN) >> 44)
            masses = inf.read(length)
            masses = masses[10:40]
            massID = masses.decode("utf-16-le")
            massID = massID.rstrip('\x00')
            isotopes.append(massID)
    return isotopes
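# For bench testing the INF parsers above outside iolite, they can be exercised
# directly; a sketch (the path below is a placeholder, not a bundled file):
#   hdr = getInfHeader("example.inf")
#   print(getInfRunsPasses("example.inf", hdr))  # e.g. {'runs': 450, 'passes': 1}
#   print(getInfIsotopes("example.inf", hdr))    # the isotope ID strings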
""" BEGIN LOW-LEVEL DAT PARSING FUNCTIONS """
def getDatHdr(datPath):
with open(datPath, mode='rb') as dat:
Go to position and read pointer to header start
dat.seek(OFFSET_HEADER_START)
start = struct.unpack('<1L', dat.read(4))[0] + 4
# Go to position and read pointer to header length
dat.seek(OFFSET_HEADER_LENGTH)
length = struct.unpack('<1L', dat.read(4))[0]
readBytes = length * 4
# Go to start and read length records of scan positions
dat.seek(start)
datHdr = struct.unpack('<%dL' % length, dat.read(readBytes))
dat.close()
return datHdr
def getDatScans(datPath, datHdr):
    with open(datPath, mode='rb') as dat:
        length = datHdr[1] - datHdr[0]
        nVals = int(length / 4)
        rows = len(datHdr)
        cols = nVals
        datScans = np.zeros((rows, cols), dtype=np.int32)
        i = 0
        for x in datHdr:
            dat.seek(x)
            scan = struct.unpack('<%dL' % nVals, dat.read(length))
            datScans[i, :] = scan
            i += 1
    return datScans
def getDatTimestamp(datPath):
    with open(datPath, mode='rb') as dat:
        dat.seek(OFFSET_TIMESTAMP)
        tmp = struct.unpack('<1L', dat.read(4))
        datTime = datetime.datetime.fromtimestamp(tmp[0])
    return datTime
def getDatFilePaths(datPath):
    with open(datPath, mode='rb') as dat:
        pathOffset = 356
        filePaths = {"DAT": "", "MET": "", "TPF": ""}
        # Read DAT File Path String
        dat.seek(pathOffset)
        length = struct.unpack('<1L', dat.read(4))[0]
        pathOffset += 4
        readBytes = length * 2
        dat.seek(pathOffset)
        tmp = dat.read(readBytes)
        path = tmp.decode("utf-16-le")
        filePaths["DAT"] = path.rstrip('\x00')
        # Read Method (*.met) File Path String
        pathOffset = pathOffset + readBytes + 16
        dat.seek(pathOffset)
        length = struct.unpack('<1l', dat.read(4))[0]
        pathOffset += 4
        readBytes = length * 2
        dat.seek(pathOffset)
        tmp = dat.read(readBytes)
        path = tmp.decode("utf-16-le")
        filePaths["MET"] = path.strip('\x00')
        # Read Tune (*.tpf) File Path String
        pathOffset = pathOffset + readBytes
        dat.seek(pathOffset)
        length = struct.unpack('<1l', dat.read(4))[0]
        pathOffset += 4
        readBytes = length * 2
        dat.seek(pathOffset)
        tmp = dat.read(readBytes)
        path = tmp.decode("utf-16-le")
        filePaths["TPF"] = path.strip('\x00')
    return filePaths
""" BEGIN FUNCTIONS FOR IOLITE DATA ASSEMBLY """
def uploadToIolite(smpData, laserOnDelay = 0):
timeStart = smpData.fileTimes["DAT"].timestamp()
relTimes = smpData.scans["scanTime"]
cycleTimes = timeStart + relTimes
DISABLED AUTO-GENERATE BEAM SECONDS
relTimes -= laserOnDelay
BeamSeconds = np.maximum(relTimes, np.zeros_like(relTimes))
bsExists = 'BeamSeconds' in data.timeSeriesNames(data.Intermediate)
data.createTimeSeries(
"BeamSeconds",
data.Intermediate,
cycleTimes,
BeamSeconds
)
print("beamseconds created")
# UPLOAD MASS-LEVEL DATA
for isotope in smpData.masses:
a = smpData.masses[isotope]
np.append(cycleTimes, cycleTimes[-1] + 0.001)
np.append(a.massTimeSeries, float("nan"))
data.addDataToInput(
isotope,
cycleTimes,
a.massTimeSeries,
{"machineName": "Thermo Element"}
)
#print(type(data.timeSeries(isotope).time)
# SET CHANNEL (MASS-LEVEL) PROPERTIES)
chData = data.timeSeries(isotope)
chData.setProperty("Units", "CPS")
chData.setProperty("Mass", a.truncMass)
chData.setProperty("AMU", a.aveMass)
chData.setProperty("Dwell Time (ms)", f"{a.dwell/1000}")
# UPLOAD FILE-LEVEL METADATA
data.createFileSampleMetadata(
smpData.name,
datetime.datetime.fromtimestamp(timeStart),
datetime.datetime.fromtimestamp(cycleTimes[-1]),
importer.fileName
)
data.createImportedFileMetadata(
datetime.datetime.fromtimestamp(timeStart),
datetime.datetime.fromtimestamp(cycleTimes[-1]), #QDateTime.fromMSecsSinceEpoch
importer.fileName,
datetime.datetime.now(),
smpData.metaData["cycles"],
list(smpData.masses.keys())
)
def autoCreateSelections(smpData, autoBaseSel=False, autoSmpSel=False):
    print("creating selections")
    timeStart = smpData.fileTimes["DAT"].timestamp()
    cycleTimes = timeStart + smpData.scans["scanTime"]
    blGroups = {}
    for grp in data.selectionGroupList(data.Baseline):
        blGroups[grp.name] = grp
    if not 'Baseline' in blGroups.keys():
        baseGrpObj = data.createSelectionGroup('Baseline', data.Baseline)
        blGroups['Baseline'] = baseGrpObj
    if autoBaseSel:
        setSelection(timeStart, blOffset, blDuration, blGroups['Baseline'], smpData.name)
    smpGroups = {}
    for grp in data.selectionGroupList(data.Sample):
        smpGroups[grp.name] = grp
    if not smpData.group in smpGroups.keys():
        smpGrpObj = data.createSelectionGroup(smpData.group, data.Sample)
        smpGroups[smpData.group] = smpGrpObj
    if autoSmpSel:
        setSelection(timeStart, opOffset, opDuration, smpGroups[smpData.group], smpData.name)
def setSelection(timeStart, offset, duration, groupObj, smpName):
    startSecs = timeStart + offset
    endSecs = startSecs + duration
    print(f"start seconds:{startSecs}")
    data.createSelection(
        groupObj,
        QDateTime.fromMSecsSinceEpoch(int(startSecs * 1000)),
        QDateTime.fromMSecsSinceEpoch(int(endSecs * 1000)),
        smpName
    )
    # print("Baseline selection complete")
```