Run 9 200GeV Dijet Run QA
Here I detail my dijet-specific run QA.
The dijet and high-pt track studies I have done up to now have used the run list compiled by Pibero for his DIS2011 inclusive jet results. This is fine for testing, but I wanted to do an independent QA that focuses on dijet quantities and on problems in the EEMC.
I start my QA with the first- and second-priority run lists used to set production priority for Run 9. From these lists, I keep only the production2009_200Gev_Hi, production2009_200Gev_Lo, and production2009_200Gev_Single runs. I then remove runs that are shorter than 3 minutes or that are missing the EMC, EEMC, or TPC detectors. Merging the two lists gives PRIORITY1and2_Long.txt. This list contains 1269 runs and is the basis for the rest of the QA.
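The selection above can be sketched as a simple filter. The run records, field names, and run numbers below are hypothetical stand-ins for illustration, not the actual STAR run database:

```python
# Hypothetical run records; in practice these come from the Run 9
# production priority lists, not from this structure.
KEEP_TRIGGERS = {"production2009_200Gev_Hi",
                 "production2009_200Gev_Lo",
                 "production2009_200Gev_Single"}
REQUIRED_DETECTORS = {"emc", "eemc", "tpc"}
MIN_LENGTH_SEC = 3 * 60  # runs shorter than 3 minutes are dropped

def keep_run(run):
    """Return True if a run survives the trigger, length, and detector cuts."""
    return (run["trigger_setup"] in KEEP_TRIGGERS
            and run["length_sec"] >= MIN_LENGTH_SEC
            and REQUIRED_DETECTORS <= set(run["detectors"]))

runs = [
    {"id": 10103041, "trigger_setup": "production2009_200Gev_Hi",
     "length_sec": 1200, "detectors": ["emc", "eemc", "tpc"]},
    {"id": 10103042, "trigger_setup": "production2009_200Gev_Hi",
     "length_sec": 90, "detectors": ["emc", "eemc", "tpc"]},   # too short
    {"id": 10103043, "trigger_setup": "pp500_production",
     "length_sec": 1200, "detectors": ["emc", "eemc", "tpc"]},  # wrong trigger
]
good = [r["id"] for r in runs if keep_run(r)]
print(good)  # [10103041]
```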
The next step in the QA is to plot the average of important quantities as a function of run number. Runs that fall more than 5 sigma from the global average are discarded. An example of some QA plots can be found below.
Figure 1: This figure shows some of the plots I use in my QA. Each point is the average of the quantity in the title for that run. The red line is the average of the points over all runs. The green lines are 5 sigma away from the average.
This pdf contains all the plots I use in my QA. The track and tower quantities are only for tracks and towers found in jets, not for every track or tower in the detector. The jet, track, and tower quantities all have three variations: All, Hi, and Lo. The 'All' category includes all jets returned by the jet finder. The 'Hi' and 'Lo' categories are for only those jets which pass my dijet conditions (the two highest-pt jets, one jet must have fired L2JetHigh or JP1, and the jets must be back to back). 'Hi' is the high-pt jet and 'Lo' is the low-pt jet.
After the QA is run, I get a list of runs that fall outside +/- 5 sigma from the global average. There are 145 runs that fail the 5 sigma QA; this list gives the run indices that fail and the quantities for which they fail. Removing the 145 bad runs from PRIORITY1and2_Long.txt gives the list PRIORITY1and2_QAd.txt, which contains 1124 runs.
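The 5 sigma cut can be sketched as follows, a minimal version assuming the per-run averages of one QA quantity have already been computed (the run numbers and values below are made up):

```python
from statistics import fmean, pstdev

def five_sigma_outliers(run_ids, run_means, n_sigma=5.0):
    """Return run ids whose per-run average lies more than
    n_sigma standard deviations from the global average."""
    mu = fmean(run_means)        # global average over all runs
    sigma = pstdev(run_means)    # spread of the per-run averages
    return [rid for rid, m in zip(run_ids, run_means)
            if abs(m - mu) > n_sigma * sigma]

# 99 well-behaved runs plus one clear outlier (made-up values).
run_ids = list(range(1, 101))
values = [1.0] * 99 + [50.0]
print(five_sigma_outliers(run_ids, values))  # [100]
```

Note that because the outlier itself inflates the standard deviation, a single bad run can only exceed 5 sigma when the sample is reasonably large; with 1269 runs per quantity that is not a limitation here.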
For an asymmetry analysis, we need to know the beam polarizations and relative luminosities for each run, so the next step in the QA is to remove runs that don't have this information. Note that for a cross-section analysis, the runs removed in this step can be included. Removing runs without relative-luminosity information from the PRIORITY1and2_QAd.txt list leaves 898 runs. Removing runs without beam-polarization information leaves 1082 runs. Removing runs without either piece of information leaves 864 runs; these are in the PRIORITY1and2_LumiPol.txt list.
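This filter is just a set intersection; a minimal sketch with made-up run numbers standing in for the real lists:

```python
# Hypothetical run numbers; the real inputs are the QAd run list and the
# runs covered by the relative-luminosity and polarization databases.
qad_runs = {1, 2, 3, 4, 5, 6}
has_rellumi = {1, 2, 3, 5}     # runs with relative-luminosity info
has_pol = {1, 2, 4, 5, 6}      # runs with beam-polarization info

# Keep only runs that have both pieces of information.
lumipol_runs = qad_runs & has_rellumi & has_pol
print(sorted(lumipol_runs))  # [1, 2, 5]
```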
The final step in the QA is to remove runs with two types of problems. First, some of the jet-tree files did not close properly when they were created and therefore cannot be read in the analysis. There are 68 corrupt files in the PRIORITY1and2_Long.txt list; these runs can be found here. Removing the corrupt runs from the _LumiPol list leaves 810 runs. Second, 70 runs do not have valid spin-bit information; a list of these runs can be found here. Removing only the no-spin runs from the _LumiPol list leaves 813 runs. Again, for a cross-section analysis, we could keep the runs without spin-bit information.
The final run list comes from removing the corrupt and no-spin runs from the _LumiPol list. The final list contains 759 runs:
diJetList_golden_origNumbered.txt
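The counts in the final step are mutually consistent, which can be checked by inclusion-exclusion on the numbers quoted above:

```python
# Within the 864-run _LumiPol list: removing corrupt files leaves 810,
# removing no-spin runs leaves 813, and removing both leaves 759.
n_lumipol, n_after_corrupt, n_after_nospin, n_final = 864, 810, 813, 759

n_corrupt = n_lumipol - n_after_corrupt       # corrupt runs in the list
n_nospin = n_lumipol - n_after_nospin         # no-spin runs in the list
n_removed = n_lumipol - n_final               # total removed in this step
n_overlap = n_corrupt + n_nospin - n_removed  # inclusion-exclusion
print(n_corrupt, n_nospin, n_overlap)  # 54 51 0
```

The zero overlap says that, within the _LumiPol list, no run is both corrupt and missing spin bits.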
Pibero's run list contains 952 runs whereas my final list contains 759; it would be good to know what accounts for the 193-run discrepancy. I have created a file listing the runs that appear in Pibero's run list but not in mine. There are 200 unique runs which are in Pibero's run list but not mine:
- 43 runs were excluded because they failed my requirement that the run length be greater than 3 minutes
- 53 runs were excluded because they were corrupted. Note that many of these may be recoverable
- 60 runs were excluded because they had invalid spin-bit values. Note that these could be used in a cross-section analysis
- 53 runs were excluded because they failed my 5 sigma run-by-run QA
- Note that 9 runs both had invalid spin bits and failed my 5 sigma QA, and so are double counted
I have also created a file listing the 7 runs that appear in my golden list but do not appear in Pibero's list.
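The bookkeeping above can be checked directly with the quoted counts: the 193-run difference between the two list sizes should equal the 200 runs unique to Pibero's list minus the 7 runs unique to mine.

```python
n_pibero, n_mine = 952, 759
only_pibero = 43 + 53 + 60 + 53 - 9   # category counts, minus 9 double-counted runs
only_mine = 7
print(only_pibero, n_pibero - n_mine, only_pibero - only_mine)  # 200 193 193
```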
- pagebs's blog