
Dead material in Front of BEMC

Using a GEANT macro provided by Jan, I looked to see how many radiation lengths of material there are in front of the BEMC.

g2t kinematics check

I was thinking that quite a bit of the info that we store in e.g. StPythiaEvent is redundant. I wrote a class that stored a much smaller set of persistent information (parton 4-vectors) and calculated all other derived quantities (s, t, u, cosTheta, …) on-the-fly.

As a check I compared the values from the g2t table that were stored in StPythiaEvent with the values I calculated (using the 4-vectors in StPythiaEvent). My calculated versions of the Mandelstam variables and cosTheta were identical to the g2t values, but I got rather different values for x1, x2, and hard pT. Here’s a plot:

figure

The fact that there are ~no counts in the top-right and bottom-left quadrants makes it look like either the reco or the g2t values are neglecting initial state radiation. That’s not supposed to be the case.

I’m calculating x1 and x2 as

x1 = (pT1*exp(eta1) + pT2*exp(eta2)) / 200
x2 = (pT1*exp(-eta1) + pT2*exp(-eta2)) / 200
If you know of something else PYTHIA is doing that I’m overlooking, please let me know.
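For reference, here is the calculation above as a small sketch (the 200 in the denominator is sqrt(s) = 200 GeV for RHIC p+p; the function name and signature are my own, not from the actual class):

```python
import math

SQRT_S = 200.0  # p+p center-of-mass energy in GeV, so s = 40000 GeV^2

def bjorken_x(pt1, eta1, pt2, eta2, sqrt_s=SQRT_S):
    """Leading-order x1, x2 from the outgoing parton pT and eta,
    assuming massless partons and no initial-state radiation."""
    x1 = (pt1 * math.exp(eta1) + pt2 * math.exp(eta2)) / sqrt_s
    x2 = (pt1 * math.exp(-eta1) + pt2 * math.exp(-eta2)) / sqrt_s
    return x1, x2
```

For a symmetric event (equal pT, opposite eta) this gives x1 = x2, as expected.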

Update 11 July 2008

I did a more systematic investigation into the formulas that are used to calculate the hard scattering kinematics stored in the g2t_pythia table, specifically mand_s, mand_t, mand_u, hard_p, cos_th, bjor_1, and bjor_2. I tried a few alternatives (taking parton masses into account, etc.) and plotted the fractional deviation between my reco quantities and the ones stored in the table. If the maximum fractional deviation was less than 1E-6, I took that difference to be a result of floating-point imprecision and the formula to be correct. Here’s what I came up with:

mand_s = (p1+p2)^2
mand_t = -0.5 * s_hat * (1-cos_theta)
mand_u = -0.5 * s_hat * (1+cos_theta)
hard_p = sqrt(Q2)
Q2 = t_hat * u_hat / s_hat

cos_th: polar angle of parton 3 after boosting to c.m. of hard scattering and rotating so that p1 and p2 are traveling along the z-axis
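The mand_t/mand_u/hard_p formulas above can be checked for self-consistency in the massless limit, where s_hat + t_hat + u_hat = 0 and hard_p reduces to the parton pT, sqrt(s_hat)/2 * sin(theta). A minimal sketch (function name is mine):

```python
import math

def hard_kinematics(s_hat, cos_th):
    """Massless 2->2 Mandelstam variables and hard pT from s_hat and
    the c.m. scattering angle, per the g2t_pythia formulas above."""
    t_hat = -0.5 * s_hat * (1.0 - cos_th)
    u_hat = -0.5 * s_hat * (1.0 + cos_th)
    q2 = t_hat * u_hat / s_hat      # Q2 = t_hat * u_hat / s_hat
    hard_p = math.sqrt(q2)          # hard_p = sqrt(Q2)
    return t_hat, u_hat, q2, hard_p
```

Note that t_hat * u_hat / s_hat = (s_hat/4) * sin^2(theta), so hard_p is just the transverse momentum of the outgoing partons in the c.m. frame.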

bjor_1, bjor_2: I don’t have a complete definition of these in terms of the parton 4-vectors. It’s true that x1*x2 = s_hat/s, where s is exactly 40000 GeV^2, so given x1 it’s possible to calculate x2. PYTHIA evaluates x1 and x2 by boosting to the c.m. frame of the hard scattering, rotating so that the collision occurs along z-axis, and then boosting back along z. The trouble is determining the value of this final boost (it’s not just the magnitude of the initial boost). From Section 9.2 of the PYTHIA manual:

Since the initial-state radiation machinery assigns space-like virtualities to the incoming partons, the definitions of x in terms of energy fractions and in terms of momentum fractions no longer coincide, and so the interacting subsystem may receive a net longitudinal boost compared with naïve expectations, as part of the parton-shower machinery.
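Even without a complete definition, the x1*x2 = s_hat/s relation constrains the pair: given one of the Bjorken x values and s_hat, the other follows. A trivial sketch (names are mine):

```python
S_TOTAL = 40000.0  # s in GeV^2, i.e. sqrt(s) = 200 GeV exactly

def x2_from_x1(x1, s_hat, s=S_TOTAL):
    """Recover x2 from x1 via x1 * x2 = s_hat / s."""
    return s_hat / (s * x1)
```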

For completeness, note that the following formulas will not always give results that match the g2t_pythia quantities exactly. Differences are usually small (less than 0.5%), but can be larger especially in the case of heavy quark production.

t_hat = (p1-p3)^2 = (p2-p4)^2
u_hat = (p1-p4)^2 = (p2-p3)^2
Q2 = (m3^2 + m4^2)/2 + (t_hat * u_hat - m3^2 * m4^2)/s_hat

and of course, any aforementioned expressions for x1 and x2 in terms of parton 4-vectors are also unreliable.
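For concreteness, the 4-vector expressions above look like this (my own helper names; 4-vectors as (E, px, py, pz) tuples):

```python
def mink_sq(p):
    """Minkowski square: (E, px, py, pz) -> E^2 - |p|^2."""
    e, px, py, pz = p
    return e * e - px * px - py * py - pz * pz

def mandelstam(p1, p2, p3, p4):
    """s_hat, t_hat, u_hat from incoming (p1, p2) and outgoing (p3, p4)
    parton 4-vectors: t_hat = (p1-p3)^2, u_hat = (p1-p4)^2."""
    s_hat = mink_sq(tuple(a + b for a, b in zip(p1, p2)))
    t_hat = mink_sq(tuple(a - b for a, b in zip(p1, p3)))
    u_hat = mink_sq(tuple(a - b for a, b in zip(p1, p4)))
    return s_hat, t_hat, u_hat

def q2_massive(s_hat, t_hat, u_hat, m3sq, m4sq):
    """Q2 = (m3^2 + m4^2)/2 + (t_hat*u_hat - m3^2*m4^2)/s_hat."""
    return 0.5 * (m3sq + m4sq) + (t_hat * u_hat - m3sq * m4sq) / s_hat
```

In the massless limit q2_massive reduces to t_hat * u_hat / s_hat, matching the earlier formula for hard_p^2.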

missing TPC FEE in UPGR15

The following command:

 nex;dcut CAVE x 0.0 2.0 2.0 0.05 0.05

was used to produce a TPC Y-Z cross section.

It looks to me like the TPC electronics cards are missing in UPGR13.

Orpheus Mall Research Log

CuCu Spectra Pi- Spectra

Mapping for SMD and PRS

Attached are extensive mapping files for the BSMD and BPRS. These maps were generated for 2008-03-04 10:30:54, which is the beginning of run 9064017.

problem with BTOW geometry in Geant

This is a zoom-in on the eta=1 gap between BTOW & ETOW.

Tracking Efficiency : details

I: Selection of good wafers

Goal:

BEMC Towers with Good Status but Bad Gain

The attached lists have the towers with status == 1 and gain == 0 and gain_status == 0. There are 130 from 2006 and 143 from 2005.


Cutting on SMD information in single particle MC

The two attached .pdf files show the results of a preliminary set of endcap cuts including the SMD.

DB Consistency Analysis / Maatkit

Intro

I’ve been struggling to keep our MIT database mirror synchronized with the BNL master, and I wanted to write up some steps we (STAR) might take to do a better job of keeping our slaves synchronized. The problem I’m worried about is the situation where, according to the Heartbeat Page, a slave is up-to-date with robinson, but in reality the slave has somehow become silently corrupted.

Initial Checksums

It turns out that this problem is actually pretty common. I found what seems to be a slick set of utilities called Maatkit that will calculate checksums of every table in a DB and look for differences between replicated DBs. I ran mk-table-checksum on the following servers:

  • robinson.star.bnl.gov:3306
  • db01.star.bnl.gov:3316
  • db02.star.bnl.gov:3316
  • db03.star.bnl.gov:3316
  • rhig.physics.yale.edu:3316
  • star1.lns.mit.edu:3316

and attached the output below as initial_checksum.txt. None of the slaves in that list are fully in sync with robinson according to those checksums. db02 and db03 come much closer than the others; in db02’s case only Calibrations_tracker.{schema,ssdHitError} are different from robinson. I verified for a few cases that differences actually do exist in the tables when the checksums don’t match.
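The idea behind table checksumming is simple: each server reduces every table to a single digest, so mismatches can be found without shipping full table dumps between sites. A toy illustration of the concept (this is NOT Maatkit's actual algorithm, which issues SQL checksum aggregates on the server side):

```python
import hashlib

def table_checksum(rows):
    """Toy table checksum: hash a canonical (sorted) text rendering of
    every row. Two servers with identical rows get identical digests."""
    h = hashlib.sha1()
    for row in sorted(rows):
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()
```

Comparing one short hex string per table per server is what makes a scheduled, site-wide consistency check cheap.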

Resynchronization and Results

Maatkit also provides a utility (mk-table-sync) which will determine and optionally execute the SQL commands needed to re-sync one server against another. I used this utility to re-sync star1 against robinson — it takes quite a while. I then ran mk-table-checksum and mk-checksum-filter again, and attached the output as checksum_after_sync_filtered.txt. Unfortunately, robinson and star1 still don’t have perfect agreement according to the checksums. I’m not sure what tables like Nodes, NodeRelation, and tableCatalog do, but I noticed the following “physics” tables still did not have matching checksums:

  • Calibrations_ftpc.ftpcGasOut
  • Calibrations_rich.trigDetSums
  • Calibrations_svt.svtPedestals
  • RunLog_onl.beamInfo
  • RunLog_onl.biFitParams
  • RunLog_onl.starMagOnl
  • RunLog_onl.zdcFitParams

Now comes a weird part: I tried a SELECT * FROM biFitParams on star1 and on robinson, and in that case there was no difference in the output. I’m not sure how the checksums could still be different in that case. I also tried diff’ing the starMagOnl tables; the only difference I found was one server reported some currents as “-0.0000000000” and the other one reported “0.0000000000” (no leading minus sign).
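The signed-zero case at least is a plausible source of checksum disagreement: -0.0 and 0.0 compare equal numerically, so a row-by-row diff can look clean, but their text renderings differ, and any checksum computed over the formatted bytes will see two different tables. A quick demonstration:

```python
# -0.0 and 0.0 are numerically equal under IEEE 754 comparison...
a, b = -0.0, 0.0
assert a == b

# ...but format to different strings, which is all a byte-level
# checksum of the rendered values sees.
print("%.10f" % a)  # -0.0000000000
print("%.10f" % b)  # 0.0000000000
```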

Summary

So, I realize the results aren’t 100% conclusive, but I still believe that these maatkit scripts would be a valuable addition to STAR’s QA toolkit. They definitely helped me correct a variety of real problems with our MIT database mirror.

It’s straightforward to take the output of mk-table-checksum and mk-checksum-filter and programmatically put it on a webpage; in fact, Mike Betancourt wrote up a little sed script to do just that. I think we should try scheduling ~daily checksum calculations and posting any discrepancies to the Heartbeat webpage automatically.