*** Notes for pp 500, run 2009
My notes from the 2009 run.
- Feb 21, we already have 8 tubes which have to be permanently masked [due to HV problem]:
SoftIds 1019,1221,1222,1223,1224,3137,3217,4432.
I am working on recovering them before we start.
- Stephen Trentalange - EMC timing scans (all 4 sub-systems), runs 10060041-10060058. Also
runs 10059043, 10059044, 10059045, 10059062, 10059067
for ESMD timing.
- I have added a helper script:
% FOlocate 10060041-10060058 10060043
- BTOW timing scan + Analysis from Alice
HV file = 2009_i0b.scv, Slow Controls Global Delay = 20 ns, 200K events per run, event rate limited by BTOW ~1 kHz.
  TCD Phase  Run Number
  12         10065031
  17         10065032
  22         10065033
  27         10065034
  30         10065035
  33         10065036
  36         10065037
  39         10065038
  42         10065039
  45         10065040
  50         10065041
  55         10065042
  60         10065043
  65         10065044
  70         10065045
- Will's page with Endcap: HV, timing, ETOW,ESMD for 2009 run
- day 66, lots of calib runs: Run 10066160 new+old HV, test of new BTOW mapping; Run 10066041
- This is the timing for the BTOW crates used in the old/new HV test runs 160 & 163:
In the configuration file the phase=36, and an overall global delay of 20 ns is loaded into the crates. The individual crate delays are in the crate configuration files. Here is a list of their values (Global Delay = 20, phase = 36):
  Crate Delay   Crate Delay   Crate Delay   Crate Delay
  01    66      09    70      11    43      19    46
  02    48      0A    38      12    26      1A    13
  03    31      0B     4      13    12      1B    21
  04    16      0C    14      14    60      1C    28
  05    68      0D    17      15    38      1D    43
  06    47      0E    33      16    14      1E    58
  07    24      0F    64      17     0
  08     8      10    57      18    63
- L0 trigger calibration: http://www.star.bnl.gov/HyperNews-star/get/startrig/3753.html , conversion of DSM ADC --> GeV
Nominal values are given by:
  -- Towers: (Thr)*0.236
  -- Jets: (Thr-5)*0.236
Preliminary values:
               500 GeV  200 GeV
  BEMC-HT-th0:   11       11
  BEMC-HT-th1:   15       15
  BEMC-HT-th2:   19       17
  BEMC-HT-th3:   25       23
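A minimal C++ sketch of this nominal DSM-threshold-to-GeV conversion (the helper names are mine, for illustration only, not from any STAR library):

  #include <cstdio>
  // Nominal DSM ADC threshold -> GeV, per the formulas above.
  double htThresholdGeV(int thr) { return thr * 0.236; }       // towers (HT)
  double jpThresholdGeV(int thr) { return (thr - 5) * 0.236; } // jet patches
  int main() {
    const int ht[4] = {11, 15, 19, 25}; // preliminary 500 GeV BEMC-HT thresholds
    for (int i = 0; i < 4; i++)
      printf("BEMC-HT-th%d = %2d -> %.2f GeV\n", i, ht[i], htThresholdGeV(ht[i]));
    return 0;
  }

(e.g. th3=25 gives 5.9 GeV and th=31 gives 7.3 GeV, matching the BHT thresholds quoted later in these notes)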
- First pp500 production run: 10068027
- 106.6 ns is the RHIC clock period (1/9.383 MHz = 106.57 ns)
- BHT rate vs. DSM threshold:
Runs 10069059 through 64:
  Run #  Barrel HT thres.  Rough trigger rate
  59     11                ~1600 Hz (trigger rate limited)
  60     15                ~400 Hz
  61     20                ~95 Hz
  62     25                ~30 Hz
  63     30                ~17 Hz
  64     35                ~7 Hz
- ETOW+ESMD timing from Alice , day 65, runs 10065031-46
- Matt asked: We would like to request runs 10066028-040.
This is 1.3M events.
mysql> select trg1,sum(nevts),sum(nevts)/180 from daqsumTrgCnts where run=10066110 and trg1=trg2 group by trg1;
+------+------------+--------+
| trg1 | sum(nevts) | rate   |
+------+------------+--------+
|    0 |      95678 | 531.54 |  (ZDC_coinc)
|    1 |      46104 | 256.13 |  (zdc_smd_east)
|    3 |        394 |   2.19 |  (bbc_coinc)
|    7 |      19080 | 106.00 |  (zdc_smd_west)
+------+------------+--------+
(rate = nevts/180 s run length)
I've posted notices of this ntuple a few times. This one was even directed to you:
http://www.star.bnl.gov/HyperNews-star/protected/get/rts/239/1/1.html
Try (in ROOT, with the scalers ntuple "sc" loaded from the file above):
  // profile of ZDC coincidence rate vs run number (offset by 10060000)
  sc.Draw("zdcx:run-10060000>>histo(50000,0.5,50000.5)","","prof");
  // print each non-empty bin as: runNumber zdcxRate
  for (int i=1; i<=50000; i++) {
    float zdcx = histo->GetBinContent(i);
    if (zdcx) printf("%d %f\n", i+10060000, zdcx);
  }
- Gene
- mysql> SELECT runNumber,blueFillNumber,beginTime from beamInfo WHERE
runNumber = 10081027;
+-----------+----------------+---------------------+
| runNumber | blueFillNumber | beginTime |
+-----------+----------------+---------------------+
| 10081027 | 0.00000000 | 2009-03-22 08:15:15 |
| 10081027 | 10407.00000000 | 2009-03-22 08:22:51 |
+-----------+----------------+---------------------+
- Checking TCD phase for the ESMD. The correct query for a
specific run looks like:
mysql> select phase from run, tcd where tcd.hash=run.tcdSetupHash and
tcdId=0 and idx_rn=10140018;
mysql -h onldb.starp.bnl.gov --port 3501 Conditions_rts -s -E -e "select .....
- Scan for bad magnetic field scale factor (sometimes it is ~0 instead of ~1):
select distinct(dayofyear(beginTime)) as day from MagFactor where
beginTime > '2009-02-17' and ScaleFactor > -0.9 and ScaleFactor < 0.9 and
dayname(beginTime) != 'Wednesday' order by day asc;
- Setup of production triggers by Bill. For the configuration "production2009_test" I adjusted the rates to be:
- bbc_coin ~ 50 Hz (early in store). This is the trigger
component that will fire all of the detectors included in the Run.
- I adjusted the zdc_smd_east (& west) so that they each fire at
about 350 Hz. These triggers only read out trigger data.
- TCD Phase [ns] / BSMD Delay4 / Run:
70 97 10066028
70 98 29
70 99 30
35 100 31
70 100 32
0 101 33
35 101 34
70 101 35
0 102 36
35 102 37
70 102 38
70 103 39
70 104 40
Set BSMD timing back to 35,101: http://drupal.star.bnl.gov/STAR/blog-entry/aliceb/2009/mar/13/bsmd-and-bprs-runs-10066028-40
- Change to Summer time (EDT) on March 8 = day 67: the cdev monitor is not fixed, and from now on the float-date is off by one hour. Reported time = real time - 1 hour. Jim knows it and will fix it some day.
- Reasonable first guesses for the "physics" parameters for the jet
algorithms are:
L2JetLow:
- Monojet: >8 GeV in a 1x1 patch
- Dijet: >7 GeV in a 1x1 patch and >3 GeV in a second 1x1 patch,
separated by a minimum Dphi gap of 0.4 (1.4 center to center)
L2JetHigh:
- Monojet: >12 GeV in a 1x1 patch
- Dijet: >11 GeV in a 1x1 patch and >4 GeV in a second 1x1 patch,
separated by a minimum Dphi gap of 0.4 (1.4 center to center)
Carl (a minimal sketch of these jet cuts follows the rare-streams note below)
- DAQ file size ~5GB. Rare streams: eventually I intend to group rare streams together up
to ~6 hours or 5Gb whichever comes first. However, this is not likely
to happen until the run is going stably and I have time to do it... (2
weeks? 3 weeks?)
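As flagged above, here is a minimal C++ sketch of those L2 jet cuts (the function and argument names are my own illustration, not the actual L2 code; I read the thresholds as 1x1-patch ET in GeV):

  #include <cmath>
  #include <cstdio>
  // et1, et2: ET of the two highest 1x1 patches; dPhi: their center-to-center
  // phi separation. A 0.4 gap corresponds to 1.4 center to center.
  bool passL2JetHigh(double et1, double et2, double dPhi) {
    if (et1 > 12.0) return true;                               // monojet
    return et1 > 11.0 && et2 > 4.0 && std::fabs(dPhi) >= 1.4;  // dijet
  }
  bool passL2JetLow(double et1, double et2, double dPhi) {
    if (et1 > 8.0) return true;                                // monojet
    return et1 > 7.0 && et2 > 3.0 && std::fabs(dPhi) >= 1.4;   // dijet
  }
  int main() {
    // e.g. an 11.5 GeV patch opposite a 4.5 GeV patch, back to back:
    printf("L2JetHigh dijet: %d\n", passL2JetHigh(11.5, 4.5, 3.1)); // -> 1
    return 0;
  }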
- Perhaps the trigger board should discuss the number/name(s) for the
rare streams. Mainly, up to now there has been statements about which
triggers should be sent to express streams, but not ***which*** express
streams. I showed that the overhead for rare streams even in the
current situation is small but could be potentially significant (up to
~3% overhead per stream integrated over the full run). The overhead
comes in tape drive utilization, so depending on how the reconstruction
is staged this could be a huge falloff of reconstruction rate for
certain time periods.
- Xin Dong: I did another TPC/TOF matching check with a recent run 10071041, and I
see the correlation between TPC/TOF WITHOUT additional TOF tray offset
applied to the ideal geometry position.
http://www.star.bnl.gov/protected/spectra/dongx/tof/Run9/Corr_Tray_21.gif
- BTOW HV & Timing set: March 14 = day 73
- ETOW timing was WRONG during full pp 500 run, explanation from Scott:
- It was wrong for the whole run. There were several mistakes made. First was to leave the tcd phase at 5 at the end of the timing scan. Second, and much more serious, was to take this value and not only save it, but save it for all run configurations. Third was that when it was determined that 21 was the correct value, it was saved for the timing scan configuration, but was _not_ pushed to all run configurations. Basically, the exact wrong choices were made all the time.
- L2W added to BHT1 trigger, L0 HT th>7.1GeV, L2W: seed>5, clust>10 GeV, rndPres=1. First run: 10074021 , +22, +25, total 3.8K L0 triggers, ~85% accepted due to cluster energy
- Summary of BPRS HV change by Will J.
- Stephen has changed & uploaded BPRS HV after run 10074056
- Intentionally unpolarized fill with polarized cdev pattern: day 74, fill F10372, STAR runs 10074051-56 show zero ZDC polarization , plot below is from Oleksandr
and CNI for blue beam:
and beam intensity
- Run 10074061 - we have a new fill at 6:48 PM with 84*84 buckets, March 15, fill ?
- Run 10074077 - changed L2W algorithm parameters: the random accept parameter was set to 50, and the cluster energy cut was increased from 10 to 13 GeV.
- ~rcorliss/l2/official-2009c latest L2 code, but not final, march 15
- //define offsets for writing to L2Result: ~rcorliss/l2/official-2009c/ready2/l2Algorithm.h
- #define L2RESULTS_2009_OFFSET_EMC_CHECK 1
- #define L2RESULTS_2009_OFFSET_EMC_PED 2
- #define L2RESULTS_2009_OFFSET_BGAMMA 3
- #define L2RESULTS_2009_OFFSET_EGAMMA 6
- #define L2RESULTS_2009_OFFSET_DIJET 9
- #define L2RESULTS_2009_OFFSET_UPSILON 17
- #define L2RESULTS_2009_OFFSET_BEMCW 20
- #define L2RESULTS_2009_OFFSET_DIJET_HIGH 25
- //#define L2RESULTS_2009_OFFSET_BHIEN 42
- #define L2RESULTS_2009_C2_OFFSET_BHIEN 0 //this writes to the start of the C2 array.
- #define L2RESULTS_2009_OFFSET_EHIEN 0
- #define L2RESULTS_2009_OFFSET_BTOW_CAL 0
- #define L2RESULTS_2009_OFFSET_ETOW_CAL 0
- L2wResult2009_print(L2wResult2009 *p){
    printf("L2wResult2009: clust ET=%.1f seed: ET=%.1f iEta=%d, iPhi=%d, trig=%d\n",
           p->clusterEt*60.0/256.0,   // cluster ET, ADC * 60/256 GeV scale
           p->seedEt*60.0/256.0,      // seed ET, same scale
           p->seedEtaBin,
           p->seedPhiBin,
           p->trigger);
  }
- To print info from the L2W alg use the following lines:
    TArrayI &L2Array = StMuDst::event()->L2Result();
    L2wResult2009 *L2wResult = (L2wResult2009 *)&L2Array[20]; // 20 = L2RESULTS_2009_OFFSET_BEMCW
    assert((runNo/1000000)==10); // only 2009
    L2wResult2009_print(L2wResult);
    if (L2wResult->trigger & 0x2) { /* event fired due to L2W clust ET>13 GeV */ }
- The BPRS holes at softID 1000 and 3150 are permanent for 2009.
- The hole in the BTOW acceptance was caused by a misconfigured crate 0x16 (end of day 74). It is the whole PMB box 22W: 4 rows of towers, softID [581,660], barrel modules 15:sub2, 16:full, 17:sub1, respectively. TP 81,83,85,89
- ZS for BPRS is set at 1.0*sigPed, for BSMD 1.5*sig
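A one-line sketch of that zero-suppression rule (my own illustration, not the actual DAQ code):

  #include <cstdio>
  // Keep a channel only if its pedestal-subtracted ADC exceeds n*sigma(ped),
  // with n = 1.0 for BPRS and n = 1.5 for BSMD, per the note above.
  bool keepChannel(float adc, float ped, float sigPed, bool isBsmd) {
    const float n = isBsmd ? 1.5f : 1.0f;
    return (adc - ped) > n * sigPed;
  }
  int main() {
    printf("BSMD adc=103 ped=100 sig=1.8 -> keep=%d\n",
           keepChannel(103, 100, 1.8f, true)); // 3.0 > 2.7 -> 1
    return 0;
  }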
- FastOffline logs go to /star/rcf/prodlog/dev/log/daq
root4star -b -q 'bfc.C(500000,"pp2009a ITTF BEmcChkStat QAalltrigs btofDat Corr3","st_physics_adc_10075016_raw_7330001.daq")'
- Speed test of minB run w/ BSMD: 400 Hz+, R10075077, 4 EMC detectors in, config=vpd_minbias
- Final ETOW timing , March 17, Will J.
- Mar 17 - band structure for the BHT trigger was identified as the reason for the strange energy spectrum in L2W
- Jeff: L2W is attached to the BHT3 trigger; set L2W to write to the express stream, and set rndAccept to 50 at the same time.
- First run with final BHT3=25 thresholds set by Bill: 10076136
- run 136, nW eve: 536
- 143 : 22
- 152 : 280
- 153 : 322
- 154 : 468
- 161 : 389
- meaning of Bit-1-select:
- 1: use BHT3
- 4: use BHT2 and BBCminB
- 2: use ?JP? not sure
- BBC rate
- Web page http://online.star.bnl.gov/cgi-bin/db_scripts/cgi/database_scaler.pl
- Gene's script: I have advertised repeatedly my ntuple of RICH scalers, which includes
run numbers and more ("t" is seconds since Feb. 1, 2009), available at:
/star/institutions/bnl/genevb/Scalers/ScalersRunTimesRun9/ntup.root
If you need something more recent than what is available there, cvs co the following two items:
offline/users/genevb/getRichScalers.csh
offline/users/genevb/matchRichScalers.C
- BSMD summary of performance, by Willie : Blog on Mar-14, 2009
- trigger crate numbers, trigger structure:
http://www.star.bnl.gov/cgi-bin/protected/cvsweb.cgi/StRoot/RTS/trg/include/trgConfNum.h?rev=1.3
  #define L1_CONF_NUM 1
  #define BC1_CONF_NUM 2
  #define MXQ_CONF_NUM 3
  #define MIX_CONF_NUM 4
  #define BCW_CONF_NUM 5
  #define BCE_CONF_NUM 6
  #define FEQ_CONF_NUM 7
  #define BBC_CONF_NUM 8
  #define BBQ_CONF_NUM 9
  #define FMS_CONF_NUM 10
  #define QT1_CONF_NUM 11
  #define QT2_CONF_NUM 12
  #define QT3_CONF_NUM 13
  #define QT4_CONF_NUM 14
So then West=5 (BCW) and East=6 (BCE).
- March 19, HOT & masked BTOW towers in fast offline:
SoftId Crate Board/Mask Mask_Value Approx ETA/PHI
639 0x1A B5M1 FFBF +0.925 -0.340
1838 0x13 B5M1 FFDF +0.875 +2.801
2107 0x11 B2M1 BFFF +0.325 +2.068
3407 0x08 B2M2 FBFF -0.325 +1.806
- March 19, 9 pm, Run 10078069, production2009_500GeV / Physics triggers, elevated BHT3 thres=25, L2W thr>13 GeV
- March 20, very good night, F10398,F10399= 84x84= pol Patt P4, runs 10078077-10079051....
- Comparison of BSMD peds taken w/ & w/o collisions by Matt --> take peds between collisions
- RICH scalers: which is which for Run 9
These are going into the offline DB as:
rs1-rs10:
bbcEast
bbcWest
bbcX
bbcYellowBkg
bbcBlueBkg
zdcEast
zdcWest
zdcX
pvpdEast
pvpdWest
and rs16:
mult
while rs11-15 are being ignored.
Meanwhile, reading from the DAQ stream is the same except that the
11th element of the array is being assigned to "mult" and the
12th-16th elements of the array are being ignored. Any guidance here
would be appreciated.
- Vernier scan was performed by Angelika, Friday evening
- Gene: All I said during the phone meeting was that we have the RICH scalers from last Friday (I should double check this). If you know the run, or time, or BBC rates, I can tell you the ZDC rates. It's in that RICH scalers ntuple I've told you about before.
- March 21, overnight L2-W has small trigger ID for some runs because JP trigger was tested, data are OK
- To check the RHIC clock used for drift velocity, do:
  mysql -h dbx.star.bnl.gov -P 3316 -C RunLog_onl -e 'select frequency,runNumber from starClockOnl where runNumber=10076136'
  +--------------------+-----------+
  | frequency          | runNumber |
  +--------------------+-----------+
  | 9383500.0000000000 |  10076136 |
  +--------------------+-----------+
- To check RHIC fill start/stop date do:
mysql -h onldb2.starp.bnl.gov --port=3502 -u deph -e "USE
Conditions_rhic; SELECT blueFillNumber,MIN(beginTime),MAX(beginTime)
from rhicBeam where blueFillNumber=10525";
+----------------+---------------------+---------------------+
| blueFillNumber | MIN(beginTime) | MAX(beginTime) |
+----------------+---------------------+---------------------+
| 10525.00000000 | 2009-04-10 00:18:54 | 2009-04-10 11:23:25 |
+----------------+---------------------+---------------------+
That's GMT, not EDT/EST!
- To get L2params from the DB do, for Run 6, run number 7136022:
mysql --host=dbbak.starp.bnl.gov --port=3405 Conditions_rts
select idx_alg,userInt from lxInts where idx_rn = 7136022;
+---------+---------+
| idx_alg | userInt |
+---------+---------+
| 17 | 1 |
| 17 | 150 |
| 19 | 1 |
| 19 | 50 |
| 13 | 12 |
| 13 | 3 |
+---------+---------+
The algo id for the Upsilon was 11. You can repeat the same exercise
with lxFloats. Pibero
- BFC used for test productions: root4star -b -q 'bfc.C(1e6,"pp2009a ITTF BEmcChkStat QAalltrigs btofDat Corr3 beamLine -VFMinuit VFPPVnoCTB -dstout -evout ",
- Dmitry onlineDB sanity check (ALL) web interface, e.g. for bad RHIC clock, only last 12 hours
- Conditions_rts (onldb:3501): select clockSource from run where idx_rn= #####
  The valid values are: 1 = local oscillator, 3 = RHIC clock.
  9.216 MHz, no beam => local clock
  9.383 MHz, beam on => RHIC clock
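A tiny C++ sketch of the frequency-based check implied above (the 50 kHz tolerance is my own choice):

  #include <cmath>
  #include <cstdio>
  // ~9.216 MHz = local oscillator (no beam), ~9.383 MHz = RHIC clock (beam on).
  const char* clockSource(double freqHz) {
    if (std::fabs(freqHz - 9.3835e6) < 5e4) return "RHIC clock (beam on)";
    if (std::fabs(freqHz - 9.216e6)  < 5e4) return "local oscillator (no beam)";
    return "unknown";
  }
  int main() {
    printf("%s\n", clockSource(9383500.)); // the starClockOnl value quoted earlier
    return 0;
  }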
- March 22, Single beam background is high based on ZDC single rates, reported by Carl, starops-hn, 2:50 am, F10407 day=80.985
rates MORE THAN DOUBLED w/ local clock!!! (ZDC E, W, and coincidence rates were unchanged.) The rates returned to 'normal' when we reestablished the rhicclock. The numbers:
  With rhicclock:  BBC And: 696 K, Yellow: 33.8 K, Blue: 27.9 K
  With localclock: BBC And: 529 K, Yellow: 73 K,   Blue: 66 K
The yellow and blue background rate measurements will also be artificially suppressed by the localclock, by a fraction that is larger than the suppression factor for the BBC And. Thus, I conclude the background rates are:
  Yellow: between 96 K and 126 K, when measuring 33.8 K
  Blue:   between 87 K and 114 K, when measuring 27.9 K
- Gene reported this BBC rate saturation, starops-hn, 2 pm,
- Endcap beam background is high, Jan reported to jet-hn, at 9 am, run 10081055
- Seen clock values in MHz
- pedestal run taken with local clock 10081089
- Jan: ESMD rates change with years??
- Carl summary of trigger status & problems, triggerbaord-hn, 9 pm,
We have a trigger suite that provides a partial implementation of our planned physics program. It's "partial" because:
  (1) Ws are there
  (2) Other mid-rapidity spin triggers are there (with increased thresholds for Egamma, JP2, and AJP, and an increased prescale for JP1)
  (3) Forward physics is not there (the FMS still has commissioning to do)
  (4) No Upsilon trigger yet (probably will overwhelm the system if run with ps=1 at L0)
  (5) No non-photonic electrons yet (there is a TCU bit problem that John and Eleanor are investigating)
  (6) The current minbias trigger rate is unlikely to integrate more than 10M events
- Never-ending discussion about L2, STP, BSMD event sizes, testing it, trig-hn, 4 pm; lots of comments from Gerard, Tonko
- March 23, we still use Myrinet @ L2, FYI
- Mike preserves all L2 root files since day 82 onward
- RHIC clock lost again, runs:10082025-109 ... fixed later in DB.
- Tower SoftId 1075 (L2ID=06td35) is masked out after run 10082027 in both the FEEs and L2.
- Mike started archiving of hist.root files @ L2, at /ldaphome/onlmon/L2algo2009/triggerName/output/
- Barrel QA plots for all 4 layers, 26 runs, 30K L2W events (4MB PDF)
- Integrated luminosity for last 4 days, by Matt
Here is that table for fills. The integrated luminosity is taken from the number of VPD events times prescale / 32 mb (which was stated by Bill Christie on Tuesday). The units are nb^-1. I overflowed the total ZDC coincidences.
  Fill   Time   nZDCCoincidences  nBHT3events  nWevents  integratedlum(1/nb)
  10383   4998    168431278          70944        6781      44.5246
  10398   8666    545996675         224212       18552     109.843
  10399  15227   1185228880         465184       38538      76.5381
  10402   3097    387263130          74966        6331      34.0096
  10403   1288    155328035          62705        5279       5.50208
  10404   5577    418052604         177896       15529      61.207
  10407   4727    353759444         144055       12234      50.4672
  10412  14835   1361040138         492987       40991      30.7909
  total: 58415    280132888 (overflowed)  1712949  144235   LT=0.413/pb
The run list can be found in /star/u/mattheww/luminosity/2009/w.run.jan
- March 24, BTOW 1 module (softID 600-720) is not triggering since midnight? e.g. run 10083041, page 2
- Run 10083032 - test2009_carl_b , disabled the L2Bgamma and L2Egamma triggers for the time being, L2W events are fine
- Collaboration meeting: talk from Dick (GEM pads), Aihong (HLT)
- new Vernier scan was done today at 10:20 am. From: christie@bnl.gov, Subject: Re: vernier scan today?, Date: March 24, 2009 4:52:02 PM EDT, To: starops-hn@orion.star.bnl.gov
- Proposal of final production config using L2gamma B, E, by Carl
Subject: 500 GeV Production trigger configuration, Date: March 24, 2009 3:32:08 PM EDT, To: triggerboard-hn@orion.star.bnl.gov
- Low luminosity fill with 12 bunches starts ~5 pm, F10421, 12x12 bunches; will be used for TPC calibration. From: gene@bnl.gov, Subject: Re: low luminosity fill and TPC, Date: March 25, 2009 2:02:01 AM EDT, To: startpc-hn@orion.star.bnl.gov
1. As expected, signed DCAs are on the order of ~1mm. Pretty
disappointing as it means we cannot simply do calibrations
sequentially. We will need to calibrate this data for SpaceCharge &
GridLeak before doing any alignment calibrations. And we will then
need to iterate at least once. In other words, don't expect
calibrations of this data to be done any time soon.
3. Less than 80% of the events got a reconstructed vertex.
4. However, there's still plenty of pileup, as judged by the fact that
the post-membrane hits per time bucket are still about 80% of the non-
post-membrane hits per time bucket. Does this mean that ~4 out of every 5 TPC
hits is a pileup hit?
- Gene computed the beam line constraint for this low lumi fill, loaded to the DB for offline reco
- V124 is a box with a circular buffer that holds 360 x 8-bit values. It presents subsequent 8-bit values on 8 wires going to the STAR trigger, advancing by one for every RHIC time bucket. The content of the 8 bits is: {up,down,unpol,fill} x {blue,yellow}. This is what I call the 8-bit spin value, which is recorded in the STAR event in TRG data. Off-line it gets degraded to a 4-bit spin value, which some people used to spin sort. CAD uploads the content of this circular buffer at the beginning of every RHIC fill.
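A C++ sketch of decoding that 8-bit spin value; which nibble is blue vs. yellow is my assumption here, so treat this only as an illustration of the layout:

  #include <cstdio>
  // {up,down,unpol,fill} per beam; low nibble assumed blue, high nibble yellow.
  void printSpin8(unsigned char spin8) {
    const char* label[4] = {"up", "down", "unpol", "fill"};
    for (int b = 0; b < 4; b++) {
      if (spin8 & (1 << b))       printf("blue: %s\n",   label[b]);
      if (spin8 & (1 << (b + 4))) printf("yellow: %s\n", label[b]);
    }
  }
  int main() { printSpin8(0x11); return 0; } // bits 0 and 4: both beams "up" in this assumed layout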
- New L2 pedestal , first run 1008315
- TPC sector 11 has problems for few days:
- Jamie: If I look at http://dean.star.bnl.gov/runPlots/10086046.pdf
p. 12 Sector 11 is missing quite a big chunk of padrows.
and looking at the phi distribution p. 10 the population is ~half.
- Tonko: RDO1 is dead (as seen by the completely blank first rows) while the anode channel around RDO#6 seems off (as
witnessed by some small amount of charge).
I can't tell about anode channels but there was a bunch of them already dead at startup and some also seem to flicker on/off,
depending on alarms and operators...
- Alexei: Sector 11 has one HV channel disconnected (in the place of RDO6) from the beginning of the run, and we should live with this the whole time;
we'll fix it this coming summer. The same problem exists for sector 12, RDO2.
- Jan: the result is non-uniform TPC tracking
- March 25, beam for physics at 1:30 am, the background is at 10%
- Jan thought: peds=0 for full day 85, runs 10085005=Fill 10434.....F10439 ...
- Tonko responded: The ZS data is still OK because what is important
is the cached files in daqman:/data/scratch/PEDESTALS/BSMD_1n.thr, 2n, 3n, and .ped
- login: operator@daqman.starp
- Run 10084010 - Production2009_500GeV --- 100 k events; 4:30 lost beams - magnet trip in yellow ring
- Eleanor: I have removed the "ignore values of 63" logic from these algorithms. Not used yet.
- Apex whole day
- New L2W trigger defined: 230601; it comprises Barrel (BHT3*L2W) & Endcap (EHT?*L2EW) - need to unpack L2Result[] to trigger-sort events
- Final production runs setup for 500 GeV: production2009_500GeVb, productionZDCpolarimetry; trigger notch at 63 fixed, first run 10084050
- vpd dsm prescale 1000 --> 200 (I left the rate the same, so we should see x5 rates)
- bgamma unchanged
- egamma --> EHT2 * JP1 (and given production id)
- I added FMSfast, FMSslow, FMSled-FPE (to be used when ready...)
These all simultaneously configure, so we should be set for the rest of the run as far as the TCU goes. The plan is for Bill to see it running once beam returns, then for this to be copied to "production2009_500GeVb" and be the config file for the rest of the run.
- Carl: Date: March 26, 2009 9:28:02 AM EDT To: starspin-hn@orion.star.bnl.gov
So here is a quick summary of the final (I hope!!!!!) 500 GeV "spin triggers" for this year. Note that several triggers have L2 thresholds that are high relative to their L0 thresholds. That's because, when L0 thresholds were raised to tame the raw trigger rate, we left the L2 thresholds alone to keep the remaining efficiency high. A side effect of this strategy is that we aren't aborting very many events.
  L2-W:
    - L0: 7.3 GeV BHT
    - L2: 5 GeV tower in 13 GeV cluster (I don't remember the random accept fraction)
    - Used to select events for the W express stream
  L2-EW:
    - L0: 7.3 GeV EHT
    - L2: 10 GeV tower, 0.5% random accept fraction
    - Used to select events for the W express stream
  BHT3:
    - L0: 7.3 GeV BHT
  L2-Bgamma: (not in yet; need a new Last DSM file from Eleanor)
    - L0: 5.9 GeV BHT with an 8.3 GeV JP, prescale by 3
    - L2: require 8.3 GeV BJP in Last DSM, 5 GeV tower in 7.5 GeV cluster, random accept 1%; this really aborts events
  L2-Egamma:
    - L0: 5.9 GeV EHT with an 8.3 GeV JP, prescale by 3
    - L2: require 8.3 GeV EJP in Last DSM, 5 GeV tower in 7.5 GeV cluster, random accept 1%; this really aborts events
  JP1:
    - L0: 8.3 GeV JP (incl. overlaps, but not adjacents), prescaled by 50
    - L2: mono-jet 8.0 GeV, di-jet 7.0 GeV opposite 3.0 GeV, 10% random accept; this really aborts events
  JP2:
    - L0: 13 GeV JP (incl. overlaps, but not adjacents)
    - L2: monitors the recorded events; does not abort any
  AJP:
    - L0: pair of JPs adjacent in phi, each with at least 6.4 GeV
  FMSfast: (not in yet; detector needs commissioning)
    - L0: intermediate energy FMS cluster threshold; record fast detectors only (incl. TOF, but not BSMD)
  FMSslow: (not in yet; detector needs commissioning)
    - L0: high energy FMS cluster threshold; record slow detectors (incl. FTPC)
  FPDE: (not in yet; detector needs commissioning)
    - L0: FPDE module summed energy above threshold; record fast detectors only (incl. TOF, but not BSMD)
Beyond the "spin triggers", the standard configuration includes VPDMB, BBCMB, and zero_bias. There are also five overlapping BBCMB triggers (BBCMB-Cat0 through BBCMB-Cat4) to be used for live time monitoring.
- Carl identified 4 existing trigger problems. From: cggroup@comp.tamu.edu, Subject: Re: EMC Trigger issues, Date: March 25, 2009 10:12:02 PM EDT, To: startrig-hn@orion.star.bnl.gov
- March 26, Gene thinks scaler data in DAQ are wrong, Tonko claims it is just a bug in software, ticket 1489
- last run with low BHT3 thr=25: 10085039
- first run with high BHT3 thr=31: 10085096
- new tier090325 file used, production 'c', HT notch should be gone, first run 10085131
- March 27, Friday
- Phenix returned to 'old rotator setup with 15% transverse polarization' and now blue pol lifetime is more stable
- RSC meeting presented H-jet vs. pC calibration for fills 10492,404,407,408,412
- Hjet/bluePol1=0.99 +/- 0.09
- Hjet/bluePol2=0.79 +/- 0.07
- Conclusion pol1 matches HJet calibration, pol2 needs 20% correction.
- Great fill F10448: long, high intensity, high pol ~35% in both beams; we took a lot of data
- March 28, Saturday. The fill continues; the next one was also long. BSMD ch 86 tripped again, as noted in the shift log
- Oleg said: You will not see hits in this module (one out of 120) for a few minutes, until the HV is restored automatically; this does not cause the run to trip.
- Unpacking of L2Result[.] data, by Pibero
- Offline QA histograms which are now being generated separately for the
W triggers (any trigger whose offline trig ID = *6xx, actually, which
I label as "other physics").
http://www.star.bnl.gov/cgi-bin/protected/starqa/qa
Enter a username, choose "2.1 shift work", "all histograms", "combine
several jobs", "OK", pick one or more runs (it will combine them all
into one set), wait a moment, and then choose "other physics".
These clearly demonstrate the dead parts of the TPC, and things like
the two hot barrel towers which are presently unmasked.
- BSMD-less events:
- Carl explains: The event IDs that should make it to the W express stream are 230601
and 230610. The former (L2-W) should always include BSMD read-out. The latter (L2-EW) will only include the BSMD if it was
requested by some other trigger that is satisfied simultaneously.
- CDEV reader program (askCdev)
- is located at /home/sysuser/epics.3.14.7/cdev_rhic/CDEV/sowinskiApp/cdevQuery
- machine: sc.starp
- BSMD crate 2 in Willis's plot (crate 33 in hardware) is OFF since March 27, Friday
- March 29, Sunday, new BSMD ped run 10088051 pedestal_rhicclock
- Absolute lumi run 10088078 , low dead time, only BHT3=31 + BBC,ZDC prescaled
- BHT3/baseZDC=1.64e-4
- ZDC cross section from Angelika=2.3 mb, crude
- Joe's calculation of luminosity. Hi All,
Here is the calculation of the run9 cross section number. So Angelika gave us 4 vernier scan luminosities
fill# date time L(10^31 s^-1cm^-2)
10207 02/23 11:06-09 0.3
10276 03/01 11:32-36 1.8
10399 03/20 13:24-30 2.6
10415 03/24 10:02-08 2.8
Only the last one of those time periods had a trigger mix that seemed to be working correctly. I looked for the closest (in time) run number before the scan (run# 10083059) and got the ZDC rate as given by the Trigger Details link in the RunLog. That number was 71513.5. All runs near the other scans gave the ZDC rate as exactly 1000 and I figure that is not correct, so I will only calculate the cross section from this last number. So starting with the standard formula
dN/dt = Rate = xsec*instant_lumi
and inverting
xsec = Rate/instant_lumi = 7.15e4/2.8e31 = 2.554e-27 (cm^2) = 2.554 mb
Note that this number disagreed with the number Bill remembered earlier, but provided I am getting the correct ZDC rate this should be correct. Now from run# 10088078 (that we took earlier today) we know that the ratio of the BHT3 xsec to the ZDC xsec is 1.64e-4. Recall that we restricted the run to a low rate so that dead time is not an issue in any of the determinations.
xsec(BHT3)/xsec(ZDC) = 1.64e-4
xsec(BHT3) = 1.64e-4*xsec(ZDC) = 4.18e-4 mb
So to get a luminosity measure from this you need to take
N(BHT3)/4.18e-4 = Int_lumi (mb^-1)
To get the number in pb^-1 you need to multiple by a factor of 10^-9 which gives you
N(BHT3)/4.18e5 = int_lumi (pb^-1)
That should give you the formula for calculating the integrated lumi in terms of pb^-1 from the BHT3 counts.
- Bill C.: Attached to this E-mail please find a PowerPoint file where I've used two of the Vernier scans done by Angelika, along with the RICH scaler rates, to calculate the cross section for the zdc_coin trigger. As I write this it occurs to me that I should also correct the ratio of the BHT3 trigger to the zdc_coin rate that we extracted from the trigger scalers, to correct for the random zdc coincidence rate during the run where we measured this ratio. I have to run to another mtg, I'll correct this later.
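Joe's conversion above, as a quick C++ sketch (numbers taken directly from his note):

  #include <cstdio>
  // xsec(BHT3) = 4.18e-4 mb; 1 mb^-1 = 1e-9 pb^-1, hence N/4.18e5 in pb^-1.
  double intLumiPb(double nBHT3) { return nBHT3 / 4.18e-4 * 1e-9; }
  int main() {
    printf("1M BHT3 events -> %.2f pb^-1\n", intLumiPb(1e6)); // ~2.39 pb^-1
    return 0;
  }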
- Status of W data taking, on March 30, 2009
- March 30, Monday, machine development today 5-8 hours expected.
- Jeff worked on BSMD test.
Both production2009_500GeV and production2009_500_c were modified.
The production2009_500 was modified at midnight. Two subtriggers were taken out.
The production2009_500_c was modified in the morning.
Dan modified minBias prescale.
- Some runs have a problem with one barrel west module:
- gap is seen: 10087031
- 109x107 blue*yellow , new abort gap pattern
- Yellow abort gap by removing two additional Yellow bunches. We should
see this change from now until the end of the 500 GeV running.
- Bad BSMD/BPRS channels:
- by Willie (March 30) are 5401-5550 (Eta and Phi) and the following BSMD-phi channels:
10351,10353,10355,10357,10359,10361,10363,10365,10367,10369,10371,10373,10375,10377,10379
- by Will J., December 15, 2008, and comments to my mail on April 1st, 2009: Will, Gerard,
- Bill: I've changed the "Fast DMA" parameter for the production2009_500Gev_c run configuration from 1 to 0.
- March 31, Tuesday, 12:15 am, R10090001: the production2009_500GeV_c configuration was modified such that it has the BSMD in each event and an increased VPD minbias rate of 90 Hz (from 40 Hz)
- Jeff: In the original file 100851832, all the events without BSMD had L2EW and not HT3/L2BW.
- Run 10090100: now all events in the W-stream have BSMD data - good. However there are 248 more events in the W-daq file than the number of events sent to the L2W-algo (=1180). The trigger IDs of those events are below; ~all events had L2-EW=230610 - this is OK
- Example of TPC instability: We have seen the following TPC anode trips:
6:00pm sector 4 -- channel 2 (disabled)
9:01pm sector 7 -- channel 6
9:07pm sector 15 -- channel 1 (disabled)
9:37pm sector 18 -- channel 4
9:45pm sector 23 -- channel 1
10:14pm sector 18 -- channel 4
10:47pm sector 15 -- channel 2
11:35pm sector 15 -- channel 2
12:01am sector 15 -- channel 2 (disabled)
-- Frank Geurts
- Joe took another luminosity measurement using BHT3=25, run 10090084
- machine development from 6-8 pm. Then turn the beam over to APEX at 8 pm.
- April 1, Wednesday, long access
- Another lumi estimate by Bill C.:
This is a final (?) followup on the earlier discussion about the BHT3 cross section.
To date (through Tuesday March 31st) I tabulate that we've collected:
2,344,301 BHT3 evts with Trigger ID 230530 (thres. ~ 5.9 GeV)
963,419 BHT3 evts with Trigger ID 230531 (thres. ~ 7.3 GeV)
Using Carl's rough estimate (with caveats that this is a lower bound, though I think it's likely fairly close) for the cross section
of trigger ID 230531 of 0.54 ub, and my rough estimate for the cross section
of trigger ID 230530 of 1.77 ub (calculated using run 10085024), I calculate that we've sampled:
(2,344,301)/1.77 ub = 1.3 pb**-1
(963,419)/0.54 ub = 1.8 pb**-1
For a total sampled luminosity of ~ 3.1 pb**-1 for the W trigger. We'll try to determine these cross sections with more precision
tomorrow and recalculate this, but I think this rough estimate is likely fairly close to the actual value.
- Oleg: I changed settings on PS24A and PS27A, thus pedestals for BTOW 0x1a, 0x1b, 0x1c were changed.
- April 2, Thursday, during the early morning fill: disabled the biggest TPC trip culprit, 18-4
- anticipated beam dump at 9 am, fill F10471. Oleg said the barrel hole covers 2 modules, 1/60 of the detector; 'prepare for physics' was not(?) executed before this fill, and we had this hole for the whole fill.
- Joe's scheduler BFC script:
/star/u/seelej/Walgo/W_eve/oneJobTempl.xml
for my .xml file with the bfc.C command. And then I don't have a script that calls it, but a command line:
cat day90a.list | awk '{print "star-submit-template -template oneJobTempl.xml -entities daynum=90,filename="$1}' | sh
where day90.list is just a file list (no directories, just the list).
- Willie made a summary file of ped residua for 4 runs (6+...13?) from day 93 using ref peds from run 10092061; all looks good except 1 BSMD module on page 65
- April 3, Friday, during fill F10476, the STAR trigger got completely sick, AJP took over it. The last reasonable run is 10093012; runs 25-36 are junk.
- John N. explains: shift crew had removed BCE and BC1 to "address crate timeouts".
This was a mistake. The timeouts are caused by a crash in the BBC
CPU. This has a knockon effect on BCE and BC1 which are
blameless. If you remove BCE and BC1 from the run, then the crash
in BBC will have knockon effects in other crates which are also
blameless.
My advice: do NOT remove trigger detectors from the run without
asking a trigger expert. You cannot fix crashes in the BBC crate by
removing BCE, BC1 or any other trigger detector.
- Also lots of magnet trips; Pibero thinks the whole fill F10476 is junk - he is shift leader on the next shift
- These 3 barrel crates still have wrong peds in L2 (run 10093012, page 7). I changed the L2 peds to the most recent run taken at 3 am today: Apr 3 10:35 pedestal.current -> run10093012.l2ped.log. First run after is 10093047; it crashed, but *not* due to bad L2 peds.
- Preparation for vernier scan, 11:30 am, run config: vernier_scan
- Jack worked on repairing the trigger; the first fixed run is 10093082, it has a new hole in BTOW, page 4 of the L2Bcalib plot. See also run 84 - the hole is different.
- Bill simplified run config, L2W and almost nothing else, dead time<5%, 30 minutes long good run 10093131
- April 4, Saturday, 1 am: Ross masked tower softID 1365, crate 22, chan 32; the new mask file for L2 is towerMask.pp_2009-04-04, the one previously used was 2009-03-23. It was masked in L0 at 3 am, after run 10094019.
- we are taking data with a minimal non-crashing configuration:
- detectors: emc bsmd tpx
- phys trig:L2W,L2-Bgamma, no VPD
- Oleg: BTOW copy cat crate 27, One board was masked out on Sat. night (see shiftlog)
- BSMD, BPRS pedestals inspected for 10094023 - looks good. We are now using 2-day-old pedestals from run 10092061.
Pages 5-14 show BPRS pedestal residua, they fluctuate by +/-1 ADC count. Looks good to me.
Remaining pages show BSMD residua.
The sections with strip ID starting at 50+N*150 and spanning 50 strips have very wide pedestals, and for those the residua are often scattered by even +/- 10 ADC. We can't do anything about it - a new pedestal set will not change this.
Other than that:
* page 35, bottom middle, systematic drift of ~10 ADC counts
* we are missing data in one module (?), page 47 bottom. Oleg, is this module working properly?
- Missing BHT2 bits: 10094023, page 7. Carl: It arises from the two missing BHT2 bits that are clearly visible in the bottom plots on page 1 of the L0 monitoring plots. Run 23 had the L2-Bgamma trigger turned on. You see two turn-on bands, one at 31 (BHT3) and a lower one at 25 (BHT2).
- 10:43 pm, Carl found out how to add JP back to the trigger (BSMD 0/+ was used instead of 0/- for Endcap triggers, and BSMD readout was initiated when BSMD was dead); the first long run is R10094095, dead time still ~0
- April 5, Sunday, no good beam and/or daq/trigger problems resulted in no good data till 7:30 am
- BSMD dead time is high & erratic after BSMD was not read out for JP triggers, see run 10095024; it was much more stable for run 10092011
- F10490 has high luminosity:
- Run 10095022, scaler boards 5, 6, 11, and 12 all imply the
VT201/0 rate was >~ 1.1 MHz. The analogous rate calculated from
BBCMB-Cat0 events, which include only trigger detectors, was ~850 kHz.
- The missing ingredient ["hidden" dead time for trigger-detector-only]
was TRG+DAQ only. I called into Christine and asked her to do
this. It's run 10095028. There the VT201/0 rate from the scalers
is >~ 1 MHz and the analogous rate from BBCMB-Cat0 is ~750 kHz
- April 6, Monday, good store overnight; yesterday BSMD dead time was high, this morning it was low, no clue why
- Integrated luminosity for the W stream, by Matt, using Bill's xsec totals. Using Bill's numbers (from http://www.star.bnl.gov/HyperNews-star/get/startrig/3888.html) for the BHT3 cross section at the two thresholds (trgID = 230530, thresh = 25, xsec = 1.77 ub and trgID = 230531, thresh = 31, xsec = 0.54 ub), I calculated the integrated luminosity for a run list that met the following criteria:
daqSetupName contains "production2009_500Ge" : 3 config : Gev, Gev_b, Gev_c
number of BHT3 events > 2000 (~ 2 minutes)
marked as good by RTS and Shift Leader
- ended with run 10088065:
trgID nBHT3events (M) LT (pb ^-1)
230530 1.90 1.07
230531 0.55 1.03
- ended with run 10094024, ~9 am:
230530 2.05 1.15
230531 1.45 2.68
total: 3.83 pb^-1
- ended R10096027, ~9 am:
230530 (lower thresh) 2.04M events, 1.15 pb^-1
230531 (higher thresh) 2.21M events, 4.10 pb^-1
Total: 5.25 pb^-1
- As of run 10097038, ~9 am:
trg 230530: 2.04M events, 1.15 pb^-1
trg 230531: 2.37M events, 4.39 pb^-1
total: 5.54 pb^-1
trg 230530: 18.0 DAQ hours
trg 230531: 50.5 DAQ hours
Total L2W events: 441k
DAQ hrs for ZDC pol: 43.5
- as of run 10098046, ~9 am:
230530: 2.04M 1.15 pb^-1 18.0 hrs
230531: 2.60M 4.81 pb^-1 54.2 hrs
total: 5.96 pb^-1
L2W: 484K, zdc: 46.65 hrs
- as of run 10099084, 9 am. BSMD was not read out since the previous report; the increment in LT from yesterday is of questionable value:
230530: 2.04M 1.15 pb^-1 18.0047 hrs
230531: 2.72M 5.03 pb^-1 56.1192 hrs
L2W: 507K, zdc: 46.6656 hrs
- a requirement is added to have tpc, btow, bsmd in the run;
for reference again before R10096027 (-0.02/pb ~ nothing)
230530: 2.03877M 1.15 pb^-1 18.0 hrs
230531: 2.20452M 4.08 pb^-1 47.3 hrs
L2W: 409.56K, zdc: 42.8 hrs
For April 10, 9 am, between fills, up to run R10100032, the totals:
230530: 2.03877M 1.15 pb^-1 18.0 hrs
230531: 2.87421M 5.32 pb^-1 57.4 hrs
L2W: 535.043K, zdc: 47.9 hrs
- up to run R10101039:
230530: 2.03877M 1.15 pb^-1 18.0 hrs
230531: 3.58288M 6.63 pb^-1 65.4 hrs
L2W: 664.353K, zdc: 48.4 hrs
- up to run ~R10102051:
230530: 2.03877M 1.15 pb^-1 18.0 hrs
230531: 4.2393M 7.85 pb^-1 72.9 hrs
L2W: 787.693K, zdc: 48.9 hrs
- up to run 10103018, April 13, "the last Monday", 6 am, end of fill:
230530: 2.03877M 1.15 pb^-1 18.0 hrs
230531: 4.94531M 9.16 pb^-1 82.6 hrs
L2W: 915.625K, zdc: 49.5 hrs
- END of pp500 run - the final numbers:
230530: 2.03877M 1.15185 pb^-1 17.9769 hrs
230531: 5.14626M 9.53011 pb^-1 85.3289 hrs
L2W: 953.314K, zdc: 49.7892 hrs
- Mask more broken TPC RDOs: power for the following 3 RDOs should be off:
S4N1, S11N1, S6N5. They are masked off anyway in Run Control.
- Tonko: Sector 20, RDO 6 - saw
that one of the 3 RDO buses developed a short on pins 0 and 1
causing the RDO to fail.
This corresponds to about 1/3 of the RDO, 12 FEEs total which
I masked out in the bad pad file.
I asked the Crew to restore the RDO in the Run Control since it
is better to have 2/3 of the RDO than none.
- April 7, Tuesday, TPC space charge correction computed by Hao, linear model
- Gene: use OSpaceZ2 OGridLeak3D options in BFC, beginTime=2009-02-01,
- TPC HV exchange: Just a quick note to follow up on the story about exchanging Anode HV power
supply modules. About 2 weeks ago, Dana and I replaced the power supply
modules for Inner sector 4, and outer sector 7. These sectors (as well as
others) were tripping in the high rate beam. The theory was that perhaps
the hardware was 'weak' and so these channels were tripping more often than
the others.
I think we now have enough experience to say that this is not true. Even
with the new power supply modules, sector 4 continues to trip, as does sector 7.
- Vernier scan is going to start soon, cdev recorder activated:
- April 8, Wednesday, Tonko: Sector 5, RDO 5
same features as the others: power seems OK, DDL link works but the FPGA does not configure.
This was also at start of fill although a pedestal run just 5 minutes earlier was successful.
It's becoming odd that all 4 dead RDOs are on the West side...
- Apex started 8 am; they seem to use the existing fill 10508
- Carl: Last night the FMSled-FPE and FMSfast triggers were activated in
production2009_500Gev_c. Could you please modify the trigger
configuration to route the data from these two triggers to an express stream?
- Hank: I have just swapped VPDE 3 and VPDE 14 with VPDW 3 and VPDW 14 in
scalers 3 and 4 after run 10098051. Let's see if the peculiar behavior follows the input or stays in board 3.
- Stephen: Loaded new L2 pedestals after run 10098046; the pedestal file is from run 10098040
- April 9, Thursday. BSMD has been left out of data taking since 3 am because of some problems; the shift crew worked till 4 am and gave up. Oleg/Stephen were not contacted.
- last run w/ BSMD 10099034,
- example of run w/o BSMD 10099073, FMS triggered like crazy at ~400 Hz
- 9:40 am, a private communication: "It is a nightmare what is going on in the STAR control room...!"
- 10am meeting:
- Jeff, BSMD issues are related to L0
- Oleg suggested rebooting L1, and BSMD is again fully functional; use runs after ~10099100.
- Tonko: if you are talking about TPX being dead at 500Hz, well this translates to 500 MB/s (!) with this beam so don't be surprised. Eventbuilders can't stand this huge data rate.
- Carl: Bit-1 select not right: runs 10099055-10099187?? Jan: how do we compute the xsection for BHT3?
-- The Bit-1-Select label changed to -1. In this case, the default
value (7) enables the Upsilon L0 trigger. That increased the L0
trigger rate for L2-W and BHT3. This was restored to the correct
value (1) prior to run 10099188.
More problems with run parameters on Apr 9 (day 99). Date: April 10, 2009 2:46:12 PM EDT, To: startrig-hn@orion.star.bnl.gov
- April 11, Saturday. Run 10101075 should be a GOOD run; the crew marked it as bad by mistake. Zhangbu sent an email to try to fix it.
- April 12, Sunday
- Run 10102067 - special run on request by Bill Christie
using configuration 'vernier_scan' and trg+daq+btow in w/ trgs zdc, bbcmin and bht3 triggers
we stop at 15 k events as per request and wait for scaler save = ok
- Run 10102068 - another 'vernier_scan' configuration as per the previous run,
BUT, this time w/ BTOW out! stop at 9.8k (as per Bill) and save scalers = ok
- new L2ped using run: run10102069.l2ped.log; it will be used for run 10102076. See l2ped10102082.pdf for a run taken w/o beam & the new peds - they look terrible. Once beam is back all should look nice.
- c) we took new BSMD peds (in expert mode) ... CR8 peds
now all "green" due to failed module now holding voltage, (CR2 and CR3 still have some failed module component)
d) In trying to "fix" BTOW 0X19 crate ... ended up compromising
TP 92 and 93 which then had to be masked and compensated for (see Wenqin report in log). - 1: BTOW tube with soft ID1433 is masked out as being hot.
2: Board 4 in crate 0x19 (masked out already) has a wrong trigger mask, and I think it causes 4 wide red lines in the QA plots.
------------------------------------
Added on: Sunday, April 12, 2009 - 05:04:45 PM
By: wenqin Xu
------------------------------------
The wrong trigger mask word draws 16 red lines at softID: 825-828, 845-848, 865-868, 885-888, and they match the 4 wide red lines in the BTOW ADC QA plot. - Wenqin Xu
- Hi, Jan
I just checked the plots carefully, matched their softId with the FEEs,
and checked their voltages. The following is my understanding of the
situation:
Page position softID module # my understanding
23 middle 901-1050 7 known as dead, VCE=0
26 top 4651-4850 32 known as dead, VCE=0
26 2nd from top 5251-5400 36 known as dead, VCE=0
41 top right 16051-16200 108 new, low VCE
The others (before page 44) you mentioned are not holes (i.e., nothing at
all), but have less gain or fluctuation, I think. Please correct me, if
anyone disagrees with me.
After page 44, I think the plots are repeated, now with respect to phi, instead
of eta. So the problems should not be counted again. (I think there is a
typo in the captions of those plots for phi.)
The 4 dead modules are confirmed in the FEE sum plot also, and no other
dead modules seen there.
As usual, questions and suggestions are welcomed.
Regards
wenqin
Comparing to the new run we just took:
http://online.star.bnl.gov/bemcMon/st_physics_10102094_ped.pdf
Hi,
I am not sure, but I think it is because we lost one board in crate 0x19
this afternoon because its trigger mask cannot be configured. One board
has 32 towers, or 32 spots on the mentioned plot. This approximately
matches the hole on the mentioned plot.
The time I did this was 16:12, and I had an entry on the shift log:
http://online.star.bnl.gov/apps/shiftLog/logForShift.jsp?day=04%2F12%2F2009&period=Day&B1=Submit
The other board masked out days ago is in crate 0x1b. So totally 2 boards,
totally 2 holes.
Regards
wenqin
Hi,
I'm used to see one hole (which can't be fixed) at X=20-25, Y=20-30 on
page 4, e.g. today in the morning:
http://online.star.bnl.gov/L2algo/l2btowCal/l2btowCal10102050.pdf
But now we have 2 holes, page 4, new hole is at x=24-28, Y=38-46
http://online.star.bnl.gov/L2algo/l2btowCal/l2btowCal10102098.pdf
April 13, Monday,
Last physics run 10103042
config:vernier scan at the end of 500 GeV run: 10103044
Last ~3 days of 500 GeV data taking:
For some reason starting with fill 10525 L2ped started getting enough statistics for me to generate pedestals and status tables again, so those are what Matt has already loaded. To fill in the gap I created the pedestal files I mentioned to Matt above. If you look at my monitoring page however, you'll see that there is a jump in the number of bad barrel towers (day 85 ~100 bad towers then day 99 ~125 bad towers). But since we don't have status tables for this time period we don't know when they went bad. So I'm not sure how you'll account for this in your analysis, but just thought you should be aware of it.