General

This page contains general information about STAR as well as a container for diverse activities (operations, STSG, STAR management documents, etc.), all of a public nature.

Collaboration

[Under work - see this link in the interim]

Acknowledgements

LaTeX version (escaped special characters):

We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, and the Open Science Grid consortium for providing resources and support.  This work was supported in part by the Office of Nuclear Physics within the U.S. DOE Office of Science, the U.S. National Science Foundation, National Natural Science Foundation of China, Chinese Academy of Science, the Ministry of Science and Technology of China and the Chinese Ministry of Education,
the Higher Education Sprout Project by Ministry of Education at NCKU, the National Research Foundation of Korea, Czech Science Foundation and Ministry of Education, Youth and Sports of the Czech Republic, Hungarian National Research, Development and Innovation Office, New National Excellency Programme of the Hungarian Ministry of Human Capacities, Department of Atomic Energy and Department of Science and Technology of the Government of India, the National Science Centre and WUT ID-UB of Poland, the Ministry of Science, Education and Sports of the Republic of Croatia, German Bundesministerium f\"ur Bildung, Wissenschaft, Forschung and Technologie (BMBF), Helmholtz Association, Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) and Agencia Nacional de Investigaci\'on y Desarrollo (ANID) of Chile.


Unicode version (native UTF-8 characters):

We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, and the Open Science Grid consortium for providing resources and support.  This work was supported in part by the Office of Nuclear Physics within the U.S. DOE Office of Science, the U.S. National Science Foundation, 
National Natural Science Foundation of China, Chinese Academy of Science, the Ministry of Science and Technology of China and the Chinese Ministry of Education, the Higher Education Sprout Project by Ministry of Education at NCKU, the National Research Foundation of Korea, Czech Science Foundation and Ministry of Education, Youth and Sports of the Czech Republic, Hungarian National Research, Development and Innovation Office, New National Excellency Programme of the Hungarian Ministry of Human Capacities, Department of Atomic Energy and Department of Science and Technology of the Government of India, the National Science Centre and WUT ID-UB of Poland, the Ministry of Science, Education and Sports of the Republic of Croatia, German Bundesministerium für Bildung, Wissenschaft, Forschung and Technologie (BMBF), Helmholtz Association, Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) and Agencia Nacional de Investigación y Desarrollo (ANID) of Chile.


Data Management Plan

Find below a boilerplate Data Management Plan for grant and proposal purposes. XXX is your group name.



Data Management Plan

1. The data generated in the course of the proposal are created at the Scientific Data & Computing Center (SDCC), formerly known as the RHIC and ATLAS Computing Facility (RACF), at Brookhaven National Laboratory (BNL), and under the supervision of the STAR Collaboration.  These data are shared with the entire STAR Collaboration and Brookhaven National Laboratory.  STAR maintains a complete backup archive of all data generated at the SDCC. These data are available from STAR to the public according to the data management plans of STAR and BNL.

2. All the results of analysis carried out here at XXX are made available on public websites that are maintained and backed up by STAR.  These data files are machine-readable and provide all the data and figures of all published work.  In addition, an analysis note is prepared for each publication that includes a complete description of all the steps used to analyze the data and a complete list of the computer codes including macros to produce the published figures.  The computer codes and figure macros are verified by other STAR collaborators prior to submission for publication.

3. The XXX group follows all requirements given by BNL, STAR, and the SDCC concerning data management.

4. The XXX group maintains its cybersecurity training both at BNL and at [Your institutions].


STAR Funding Agencies

STAR is supported by several funding agencies in countries on four continents, in addition to significant support from STAR member institutions. Here is a list of STAR funding agencies, with links to their web pages.

Office of Nuclear Physics,
Office of High Energy Physics,
Office of Science, Department of Energy
United States
National Science Foundation,
United States
Frankfurt Institute for Advanced Studies,
Germany
Institut National de Physique Nucléaire et de Physique des Particules,
France
Fundação de Amparo à Pesquisa do Estado de São Paulo,
Brazil
  Ministry of Science and Technology,
Russia
National Natural Science Foundation,
China
Chinese Academy of Sciences (CAS),
China
Ministry of Science and Technology (MoST),
China
Department of Atomic Energy(DAE),
India
Department of Science and Technology,
India
Grant Agency of Czech Republic,
Czech Republic
Research Council of Fundamental Science & Technology (KRCF),
Korea
Research Foundation,
Korea

Nominated Speakers 2019-2024

Conference Date Abstracts due Speaker(s) (talk uploaded/not uploaded/not uploaded but direct invite) Webpage

Excited QCD 2019

Jan 30 – Feb 3

 

1 talk (Jana Bielcikova)

https://indico.cern.ch/event/720726/

WWND2019 Jan 6-12   11 talks (Hiroki Kato, Yota Kawamura, Shuai Yang, Pengfei Wang, Subhash Singha, Jaroslav Adam, Niseem Magdy, Saehanseul Oh, Matt Posik, Zhenzhen Yang, Takafumi Niida) https://indico.cern.ch/event/766194/
HFM-2019 Mar 18-20  

1 plenary talk (Md. Nasim)

https://indico.cern.ch/event/760884

MoriondQCD 2019

Mar 23-30

 

2 plenary talks (Jaroslav Bielcik, Zhenyu Ye)

 http://moriond.in2p3.fr/2019/QCD/

Fairness 2019

May 20-24

 

1 talk (Leszek Kosarzewski)

https://indico.gsi.de/event/7684/overview

DIS2019

April 8-12   1 talk (Petr Chaloupka)  

RHIC and AGS AUM 2019, jet workshop

June 4-7   2 talks (Saehanseul Oh, Kolja Kauder)

https://www.bnl.gov/aum2019/

RHIC and AGS AUM 2019, spin workshop June 4-7   4 talks (Qian Yang, Renee Fatemi, Jinlong Zhang, Amilkar Quintero) https://www.bnl.gov/aum2019/
RHIC and AGS 2019  June 4-7   3 plenary talks (James Daniel Brandenburg, Amilkar Quintero, Irakli Chakaberia) https://www.bnl.gov/aum2019/

SQM2019

June 10-15   2 plenary talks (Guannan Xie, Jie Zhao)

https://sqm2019.ba.infn.it

IS2019 June 24-28   1 plenary talk (Shengli Huang)

https://www.bnl.gov/is2019/

 quarkonium workshop 2019  May 13-17    2 speakers (Qian Yang, Jaroslav Bielcik)  
 Hadron 2019  August 16-21    1 talk (Jinhui Chen)  http://hadron2019.csp.escience.cn
 APS DPF 2019  July 29-Aug 2    2 talks (Bill Schmidke, Niseem Magdy)  
 ICNFP2019  August 21-29    3 talks (Ahmed Hamed, Jaroslav Bielcik, David Tlusty (plenary))  https://indico.cern.ch/event/754973/ 
 Pacific Spin 2019  August 27-30    1 talk (Akio Ogawa)  
 ISMD 2019  September 9-13    1 plenary talk (Qinghua Xu)  https://indico.cern.ch/event/761800
 QFTHEP’2019  September 22-29    1 plenary speaker (Grigory Nigmatkulov)  http://qfthep.sinp.msu.ru
 QM2019  November 4-9    1 plenary speaker (Zhangbu Xu), 5 parallel speakers (Jie Zhao, Ashish Pandav, Hanseul Oh, Muhammad Usman, Yanfang Liu)  https://indico.cern.ch/event/792436/
 RHIC and AGS open forum meeting 2019  October 15     1 talk (Carl Gagliardi)  
 workshop on forward physics 2019  November 18-21   1 talk (David Kapukchyan)   https://indico.cern.ch/event/823693/ 
 workshop MPI at the LHC 2019  November 18-22    1 talk (Jana Bielcikova)  https://indico.cern.ch/event/816226/ 
 2020 Santa Fe Jets and Heavy flavor workshop  February 3-5    1 talk (Alex Jentsch) http://www.cvent.com/events/2020-santa-fe-jets-and-heavy-flavor-workshop/event-summary-d48f98a525a74998a367d58d8bbb4362.aspx 
 Moriond QCD 2020  March 28-Aprl 4    2 talks (Audrey Francisco, Shusu Shi)  http://moriond.in2p3.fr/2020/QCD/
 WWND2020  March 1-7    6 talks (Prithwish Tribedy, Annika Ewigleben, Daniel Brandenburg, Shengli Huang, Adam Gibson-Even, Liang Yue)  https://indico.cern.ch/event/841247/ 
 Transversity 2020 May 25-29    https://agenda.infn.it/e/transversity2020 , postponed
 nucleus 2020  May 26-30    1 speaker (Alexey Aparin)  https://events.spbu.ru/events/nucleus-2020?lang=Eng , postponed to Oct 11-17, 2020 (online)
 CPOD2020 May 4-8    2 speakers  https://indico.cern.ch/event/851194/  , postponed
 RHIC and AGS users meeting 2020, small system workshop  June 9    1 speaker (Roy Lacey)  postponed - becomes an online meeting 
RHIC and AGS users meeting 2020  June 9-12   3 plenary speakers ()  postponed - becomes an online meeting 
Hard Probes 2020   June 1-5   1 plenary (Zaochen Ye), 2 parallel (Zhen Wang)  https://indico.cern.ch/event/751767/ , became an online meeting
         
 FAIR-NICA centrality flow workshop  Aug 24-28    2 plenary (Shinichi Esumi, Shusu Shi)   http://indico.oris.mephi.ru/event/181/
 ICNFP 2020  Sep 4-12   1 plenary (Ahmed Hamed)   https://indico.cern.ch/event/868045/ 
Online AUM 2020  Oct 22-23   2 Plenary(run report: Daniel Cebra, highlight: Raghav Kunnawalkam Elayavalli):  
BES Workshop: Takafumi Niida, Sam Heppelmann, Yang Wu 
2 Cold QCD (future plans: Scott Wissink, highlights: Nickolas Lukow)
High Pt: Isaac Mooney 
https://www.bnl.gov/aum2020/
 
CPOD 2021 Mar 15-19   1 plenary (Shinichi Esumi), merged abstract: Prabhupada Dixit  https://indico.cern.ch/event/985460/
Moriond 2021  Mar 27-Apr 3    2 speakers (Yu Zhang, Saehanseul Oh)   http://moriond.in2p3.fr/2021/QCD/ 
DIS 2021 - WG6 parallel session  Apr 12-16   Oleg Tsai https://www.stonybrook.edu/cfns/dis2021/
GHP 2021 Apr 13-16   Sooraj Radhakrishnan https://indico.jlab.org/event/412/
SQM 2021 May 17-21   1) Updates on flavor production from STAR (Talk on May 17th): Sooraj Radhakrishnan 
2) Recent milestones from STAR: New developments and open questions (Talk on May 17th): Rongrong Ma 
3) STAR Detector Upgrades (Talk on May 22nd): Chi Yang  
parallel Talks: #189: Shenghui Zhang, #196: Yan Huang, #172&191: Moe Isshiki
https://indico.cern.ch/event/985652/
AUM 2021    June 8-21    Plenary: Matt Kelsey, Leszek Kosarzewski 
 
 https://www.bnl.gov/aum2021/
Nucleus 2021    Sep 20-25    Grigory Nigmatkulov  https://events.spbu.ru/events/nucleus-2021?lang=Eng, https://indico.cern.ch/event/1012633/
ICNFP2021   23 Aug-2 Sep   1.All heavy ions: Ahmed Hamed
2.All spin physics: Amilkar Quintero 
3. BES talk:  Toshihiro Nonaka 
4. Mini topical review, High pt & jets: Nihar Sahoo 
5. Mini topical review, Heavy Flavor: Te-Chuan Huang 
6. Mini topical review, FCV: Chuan-Jian Zhang 
  https://indico.cern.ch/event/1025480/page/22286-workshops-icnfp-2021
QCD-N2021   Oct 4-8    Salvatore Fazio   https://indico.fis.ucm.es/event/16/
12th MPI at LHC WH5    Oct 11-15    Yue-Hang Leung  https://indico.lip.pt/event/688/overview (combined STAR and PHENIX Results)
DNP-Special Isobar Session   Oct 11-14    Sergei Voloshin   http://web.mit.edu/dnp2021/ 
WWND 2022    Feb 27-Mar 5     Yuanjing Ji "Hypernuclei production at STAR"
Niseem Magdy "CME search with isobar collisions"
Xu Sun "STAR's Forward Upgrade Program"
 https://indico.cern.ch/event/1039540/
 CPHI 2022  Mar 7-12    Xiaoxuan Chu  
 Moriond 2022  Mar 19-26    Tomas Truhlar   https://moriond.in2p3.fr/2022/
 Quark Matter 2022   April 4-10    Prithwish Tribedy (Plenary)
Merge talk 1 - Yue-Hang Leung 
Merge talk 2 - Aswini Kumar Sahoo 
Merge talk 3 - Tong Liu 
Merge talk 5 - Yu Hu 
Merge talk 6 - Haojie Xu 
Merge talk 7 - Ashik Ikbal 
Merge talk 9 - Ke Mi 
Merge talk10 - Ziyue Zhang   
https://indico.cern.ch/event/895086/ 
 DIS 2022   May 2-6    generic invitation for speakers to submit abstracts  https://indico.cern.ch/event/1072533/

  Transversity 2022

May 23-27

 

Will Jacobs

  https://agenda.infn.it/event/19219/

RHIC AUM 2022

 June 7-10

 

Evan Finch, Zilong Chang, Takafumi Niida

  https://www.bnl.gov/rhicagsaum/

  SQM 2022

June 13-17

 

Barbara Trzeciak (Plenary)

Parallel talks that didn't have an identified speaker or merged talks:
Krishan Gopal,
Yingjie Zhou,
Sameer Aslam,
Arushi Dhamija

  https://sqm2022.pusan.ac.kr

  HF-WINC

 July 14-16

 

Sonia Kabana

  https://indico.cern.ch/event/883427/

  CIPANP 2022

Aug 29-Sep 4

 

Niseem Magdy

  https://agenda.hep.wisc.edu/event/1644/

  ICNFP 2022

Aug 30-Sep 11

 

Barbara Trzeciak,
Gang Wang,
Qian Yang

 https://indico.cern.ch/event/1133591/

  QNP 2022

 Sep 5-9 

 

Yu Hu,
Xiaofeng Wang,
Jie Zhao

 https://indico.jlab.org/event/344/

  PIC 2022

  Sep 5-9 

 Tbilisi State University

Sonia Kabana

  https://indico.cern.ch/event/1158815/abstracts/

 Nuclear Science and Technologies

 Sep 26-30

  (Only identified whom the organizers could invite - not an STC-selected talk)

Grigory Nigmatkulov

 https://indico.alem.cloud/event/1/

 EuNPC 2022

 Oct 24-28

 

Daniel Kikola

  https://indico.cern.ch/event/1104299/

 MPI 2022

 Nov14-18

 

Xiaoxuan Chu

 https://indico.ift.uam-csic.es/event/14/

 CPOD 2022

 Nov 28-Dec 2, 2022

 

Md Nasim

https://indico.cern.ch/event/1186000/ 

 ICPAQGP-2023

 Feb 7-10, 2023

 

Nihar Sahoo
Subash Singha

 https://events.vecc.gov.in/event/19/overview

 Moriond QCD

 March 25-April 1

 

Veronica Verkest
Zachary Sweger

 https://moriond.in2p3.fr/2023/QCD/

 CERN BES Seminar 2023

 

 (Only identified whom the organizers could invite - not an STC-selected talk)

Yue-Hang Leung

 

 Hard Probes 2023

 March 26-31

 

Nihar Sahoo (Plenary Talk); Joern Putschke gave the talk on behalf of Nihar and STAR due to visa issues.

 https://wwuindico.uni-muenster.de/event/1409/

 IS2023

 June 19-23

 

Nicole Lewis

https://indico.cern.ch/event/1043736/ 

 Hadron2023

 June 5-9

 

Lori Vassiliev

 https://agenda.infn.it/event/33110/

IWHSS2023 

 June 26-29

 

 https://indico.cern.ch/e/iwhss2023


 Lund Jet Plane 2023

Jul 3-7 

 

Monika Robotkova

 https://indico.cern.ch/e/LundJetPlane2023

 21st Lomonosov Conf

 Aug 24-30

 

https://lomcon.ru/ 

QuarkMatter23

 Sep 3-9

 

Rosi Reed (Plenary)
Merge talk 1 - Zuowen Liu
Merge talk 2 - Chengdong Han
Merge talk 3 - Baoshan Xi
Merge talk 4 - Xiaoyu Liu
Merge talk 5 - Aditya Prasad Dash
Merge talk 6 - Yuan Su
Merge talk 7 - Ishu Aggarwal
Merge talk 8 - Matthew Harasty

  https://indico.cern.ch/event/1139644/

 SPIN 2023

 Sep 23-29

 

Ting Lin

 https://indico.jlab.org/event/663/

 ISMD 2023

 Aug 21–26

 

1. Baryon Junctions - Zebo Tang

2. Flow - Vinh Luong

3. Jets -


 https://indico.cern.ch/event/1258038/

RHIC AUM 2023 Aug 1-4  

1. Run 23 Report - Kong Tu

2. STAR Highlight - Rongrong Ma

3. Forward Upgrade - Xilin Liang

4. Forward Tracker - Zhen Wang

https://www.bnl.gov/rhicagsaum/ 

Baldin ISHEPP XXV

Sep 18-23, 2023 

 

STAR Overview: Artem Korobitsin

http://relnp.jinr.ru/ishepp

International Symposium on Physics in Collision (PIC 2023)

October 10-13, 2023 

 

1. Mini-review (15+3 min) on spin physics of STAR - Jae Nam

2. Mini-review (15+3 min) on non-spin STAR highlight results - Vipul Bairathi

https://indico.cern.ch/event/1190468/

CFNS Workshop

Nov 6-9, 2023 

 

1. Recent STAR heavy flavor and quarkonia study highlight. (15+5) - Wei Zhang

2. Recent STAR spin studies. (15+5) - Dmitry Kalinkin

 

https://indico.cfnssbu.physics.sunysb.edu/event/110/

MPI@LHC

Nov 20-24, 2023 

 

WG 1. STAR measurements sensitive to Hadronization and UE/MPI - Leszek Kosarzewski

WG3. Measurements of azimuthal anisotropy in small systems at RHIC (summary) - 

 

https://indico.cern.ch/event/1281679/

UPC International Workshop

Dec 11-15, 2023 

 

1. An overview talk on the latest UPC results. - David Tlusty

2. Two-photon production of dilepton pairs in UPC (Session 2). - Wangmei Zha

3. Two-photon production of dilepton pairs in events with nuclear overlap (Session 5). - Zhang Li

https://indico.cern.ch/event/1263865/

Excited QCD 2024

Jan 14-20, 2024

 Nov 15, 2023

Exotic Hadrons or Heavy Ions (20+10) - Gavin Wilks

 

https://www.benasque.org/2024eqcd/

QWG 2024

Feb 26-March 1, 2024

 

Recent results from STAR in heavy-ion collisions (15+5) - Nihar Sahoo
 

Recent results from STAR in p+p collisions (15'+5') - Md. Nasim 

https://web.iisermohali.ac.in/dept/physics/QWG2024/index.html

Moriond 2024

March 31-April 7, 2024

 

Recent Cold QCD Results from STAR - Ting Lin
 

Recent Highlights from STAR BES Phase 2 - Dylan Neff

https://moriond.in2p3.fr/2024/QCD/

DIS 2024 April 8-April 12, 2024 Feb 9, 2024

Overview of STAR Spin and 3D Structure - Xiaoxuan Chu

Recent heavy flavor measurements from RHIC (15'+5') - Veronika Prozorova

https://lpsc-indico.in2p3.fr/event/3268/overview

CPOD 2024 May 20-24, 2024 March 1, 2024

STAR Overview - Sooraj Radhakrishnan

Net Proton in BES-II - Ashish Pandav

Measurement of Deuteron-Lambda Correlation for STAR - Yu Hu

https://conferences.lbl.gov/event/1376/

BOOST 2024 July 29-August 2, 2024 May 19, 2024  Diptanil Roy

https://agenda.infn.it/event/37093/

SQM 2024 June 3, 2024  February 23, 2024

STAR Highlights (28'+2') - Qian Yang

Joint abstract: Light Nuclei at BES-II - Yixuan Jin

https://sqm2024.iphc.cnrs.fr

Transversity 2024 June 3-7, 2024  

Transversity and TMDs - Bassam Aboona

IFF at 200 and 500 GeV - Bernd Surrow

 https://agenda.infn.it/event/38132/

AUM 2024 June 13-14, 2024  

STAR Run 2024 Report - Jaroslav Adam

STAR Highlights - Yicheng Feng

STAR Open Heavy Flavor - Ondrej Lomicky

STAR Heavy Quarkonia - Wei Zhang

STAR Flow Highlights - Priyanshi Sinha

STAR Vorticity - Xingrui Gou

STAR Spin alignment - Diyu Shen

STAR AI/ML - Hannah Harrison-Smith

https://www.bnl.gov/rhicagsaum/

CAARI-SNEAP 2024 July 21-26, 2024 May 2, 2024

Latest progress in high energy nuclear physics and the future Electron-Ion Collider opportunity -

https://caari-sneap.com/

INT 2024 August 19-23, 2024  

STAR heavy ions perspectives - Grigory Nigmatkulov

 https://www.int.washington.edu/programs-and-workshops/24-2b

EXA/LEAP 2024 August 26-30, 2024 April 14, 2024

(Anti)hypertriton production - Hao Qiu

https://www.oeaw.ac.at/smi/talks-and-events/exa/exa-leap-2024

New Trends in High Energy Physics 2024 September 2-5, 2024 June 16, 2024

1. Collective properties of the nuclear matter at extreme conditions - Vipul Bairathi (direct invite)

2. Correlations and Fluctuations - Daniel Wielanek

3. Spin Physics - Ken Barish

https://indico.cern.ch/event/1353482/

ICNFP 2024 August 26-September 4, 2024 July 5, 2024

STAR Highlight (non-spin) - Barbara Trzeciak

STAR Spin Highlight - Ting Lin

STAR Correlations/fluctuations Overview - Yu Hu

STAR FCV Overview - Yicheng Feng

STAR HP Overview - Tanmay Pani

STAR LFS/UPC Overview -

STAR Cold-QCD Overview - Xilin Liang

https://indico.cern.ch/event/1307446/

Diffraction 2024 September 8-14, 2024 May 31, 2024

UPC+photonuclear RHIC results & prospects (20'+5') - Wangmei Zha

PDFs at low/high-x & saturation - Zilong Chang

https://indico.cern.ch/event/1354173/

Hard Probes 2024 September 22-27, 2024 May 31, 2024

STAR Highlight Overview (30') - Isaac Mooney

https://indico.cern.ch/event/1339555/

Zimányi School 2024 December 2-6, 2024  

STAR Overview - Hanna Zbroszczyk

http://zimanyischool.kfki.hu/24/

HF-HNC 2024 December 6-11, 2024  

Latest measurements of heavy flavor production - Dandan Shen

https://indico.cern.ch/event/1429980/

GHP 2025 March 14-16, 2025  

(Plenary) Results from Beam Energy Scan II - Shusu Shi

https://indico.jlab.org/event/868/

QM 2025 April 7, 2025 Nov 15, 2024

(Plenary) STAR Highlight - 

https://indico.cern.ch/event/1334113/

Operations

Welcome to the STAR Operations Home Page


Daily operations meeting at 10am on Zoom:  https://bnl.zoomgov.com/j/1603605873?pwd=NVk1aWs2OW5yb01rTzNtODhiaUJtQT09
(Zoom Meeting ID: 160 360 5873 Passcode: 110789)
Communications for STAR control room /shift/operation related on Zoom: https://bnl.zoomgov.com/j/1605144596?pwd=N3ExMDh3Q2txK0FxYTBVTzg4N0hHZz09
(Zoom Meeting ID: 160 514 4596 Passcode: 726787)

STAR online page: https://online.star.bnl.gov
STAR operations email list: https://lists.bnl.gov/mailman/listinfo/star-ops-l

Select the menu for information relevant to STAR Operations. 



Run-22 pp510 guides (SL desk printouts)




BBC/ZDC/VPD HV system (LeCroy1440) communication problem

Loss of communication with LeCroy1445A.

This often happens when the LeCroy was turned off or lost power due to a power dip.
Indications are:
- "bbchv" app on sc3 shows black on/off
- "bbchv" app doesn't update readout voltages/current
- Cannot turn on/off from "bbchv"
Solution:
Go to the SC5 computer. There should be a "Restore LeCroy Communication" window:
   

Follow the instructions on the window.
If the window is not on SC5, open a terminal and type ./scripts/restartLC.py
Make sure you have a bermuda terminal open on the next monitor.
(If not, open a terminal and type the "sys@bermuda" command. The password is the same as the one in the shift leader's binder for the SC5 sysuser.)


Controlled/Restricted Access Requests

  • The period coordinator/shift leaders should have a list with controlled access requests.
  • Leave your phone number if you want to be called for unscheduled controlled access.
  • There are only 8 keys for controlled access.
  • Always let the shift leader know when you go in and come out.
  • Make a note in the elog about the work that was performed.

  • Next maintenance day on Thursday, March 9, 2017 (7:30am-3:30pm).

Detector Readiness Checklist for Cosmics

production_pp200long2_2TOF+MTD+ETOW+BTOW+ESMD+BSMD+GMT+FPS+PP+IST+ >>Feb. 27, 2018<<

Detector Readiness Checklist (Cosmic Data Taking, 2018)

1)           Once Per Day

A)     Reboot bdb.starp.bnl.gov (see section 3 in slow controls manual)
B)     Noise run for TOF/MTD pedAsPhys_tcd_only with TRG+DAQ+TOF+MTD (4M events, takes about 5-6 minutes)
C)     EPD IV scan (can be in parallel with cosmics, mark run in elog)

2)           Pedestals once per Shift

A)     Take pedestal_tcd_only with TRG+DAQ+TPX+ITPC+ETOW+TOF+ETOF+MTD+GMT+FCS (1 event, run control will issue additional events automatically)
B)     Take pedestal_rhicclock_clean with TRG+DAQ (1k events)

3)           Cosmic Data Taking

A)     Check detector states for cosmic data taking
B)     Take CosmicLocalClock with TRG+DAQ+TPX+ITPC+ETOW+TOF+ETOF+MTD+GMT+L4 (30 minutes)
C)     Laser runs every 4 hours (warm up in advance, 4k events)

 

Notes:

ETOF HV/FEE is still under expert control. In case the magnet needs to be ramped/trips, call experts!

Status of ETOF may change, check with outgoing shiftleader and elog!



 

 

Detector Readiness Checklist for current run


Detector Readiness (old)

 (old - attachements hidden)

Detector States Spreadsheet


Detector States (old)

 (old)

Notes from Operations Meeting

October 6th, 2024

Recap of the Past 24 Hours:

  • MCR provided 56x56 Au+Au collisions around 5 AM (with crossing angle but no rebucketing yet). Collisions were stable with low background. We focused on setting up global timing, made significant progress, and took one physics run. The beam was dumped at 9:25 AM. sPHENIX had a 20-30 minute access, followed by APEX and RHIC rebucketing.
  • ZDC AND rate was 3 kHz. We started with tune_AuAu_2024. Global timing looked good, with TAC numbers for BBC (blue background was high, 2x collision rate). VPDs and TOF were turned on, starting with zdc-tac, then switching to zdc-mb.
  • Ran tune_AuAu_2024 with TPC, TOF, BTOW, and L4 (no L4 events).
  • Ran setup_AuAu_2024 with TPC, TOF, BTOW, and L4 (L4 events recorded, but no plots). Reached out to Jeff to resolve L4 issues, as plots were needed for global timing. Continued running without L4 until Jeff fixed the issue.
  • The evp /a disk was 60% full; Jeff cleared files. l4Cali and l4Disp are still down. Diyu was contacted and will work on the calibration server within 2-3 hours.
  • After receiving the L4 plot, global timing looked good, with only a 0.5 ns offset.
  • Vz was off by ~20 cm. MCR wasn't seeing the real-time vertex, but this was fixed and improved.
  • BBC setup plan: TAC is a bit off, but BBC looks good. Shift crew was instructed to take 5 runs per BBC HV setting, adjust config, collect 50k events, and log runs as BBC HV scan in ESL.
  • Oleg initiated a cal_scan, timing within 1 ns. He will review the data later.
  • Akio loaded BBC HV data.
  • EPD experts took initial scans for TAC adjustment, with a follow-up planned.
  • Run 25280025 was the first Au+Au 200 production setup; fastoffline was requested and is in progress (no forward tracker, eTOF, or GMT yet).
  • Forward trackers (FST/sTGC) will be turned on after rebucketing (post-10 AM).

Issues Encountered:

  • No major issues. The BCW issue was solved after Tim accessed and fixed the crate.
  • l4Cali and l4Disp are still down, Diyu is working on it.

Plan for the Day (RHIC & STAR):

  • Access now; RHIC will handle rebucketing.
  • Forward trackers will be turned on mid-next fill (when background is low).
  • EPD work is ongoing; experts will provide updates on EPD and calorimeters.

Looking Ahead:

  • First Au+Au collisions at STAR (56x56) occurred this morning; one physics run was completed, and the next fill is expected in a few hours (delays likely).
  • This fill wasn’t useful for sPHENIX; they’re taking access. They’ll request 6x6 for background studies when stable. RHIC is planning to install a 56 MHz RF on Monday for a narrow vertex. Stable beams are needed by Wednesday.
  • Commissioning plan: compile a list of detector experts (EPD: Mike needs a call when the next fill starts, FST & sTGC experts need a call mid-fill, FST: time bin, sTGC performance will be checked and followed up, VPD will use run23 files).
  • Update the detector readiness checklist (shift crew to continue with setup_AuAu_2024: TRG+DAQ+iTPC+TPX+TOF+BTOW+ETOW+L4+FCS).
  • For 6x6, we’ll use minbias for regular production; fastoffline will be requested.
  • A shift leader for the next owl shift was found; the day shift will run with one DO.

Saturday, October 5th, 2024

Urgent Issues:

  1. BCW VME crate repair: Tim was informed, but repairs are on hold due to RHIC's beam development.
  2. Shift leader missing: We have no shift leader for the next owl shift.

Recap of the Past 24 Hours:

  • Collected cosmic data throughout the day as no beam development work was done (despite efforts).
  • MCR called for APEX at 1:30 am but reported issues at 3 am.
  • Encountered L0 trigger not starting and BCW components not working, which were removed.
  • Yellow Abort Kicker issue continues.
  • Blue & Yellow injection and ramp setup continues, updates expected in a few hours.

Encountered Issues:

  • BCW configuration error and FPGA failure:

    • BCW and BCW_DSM2 nodes are missing from the component tree.
    • The BCW VME crate requires repair (need to discuss access and coordination).
    • Tim will replace the board and power cycle it locally, which should take 30 minutes (we have spare boards).
    • Follow-up with Tim; we can run min-bias without the fix and should take opportunistic access if available.
  • L0/L1 not responding:

    • Power cycling crate 62 failed initially.
    • Prashanth went in manually, and David helped resolve it.
  • 2000+ timeouts:

    • Stopped the run.
    • Shift leader consulted Akio, Jeff, and Hank.
    • Jeff tried multiple power cycles of L0/L1, and it eventually came back.
  • EPD hot tiles:

    • Restarted the run, and the issue was gone.
    • Maria will investigate further.
  • EVB23 issue:

    • EVB23 is still out of the run, with EVB22 and EVB24 expected to return soon.
    • Awaiting further updates from Jeff.

Plan for the Day (RHIC & STAR):

  • Continue Yellow Abort Kicker work.
  • Blue & Yellow injection and ramp scheduled for the evening.
  • Ramp development overnight.
  • Continue taking cosmic runs when no beam is available; switch to APEX otherwise.

Looking Ahead:

  • First Au+Au collisions at STAR expected Sunday evening (possible delay). First fill could be 56x56.
  • October 4-8: sPHENIX requests 111x111 initially, then 6x6 for background study when stable.
    • RHIC plans to install 56 MHz RF on Monday for narrow vertex; stable beam required before Wednesday.
  • STAR needs/plans for bunches/luminosity:
    • TPC group requests the same crossing angle for 6x6 (one or two fills based on sPHENIX needs).
  • Detector commissioning plan:
    • Compile a list of experts to call.
    • VPD will use Run23 files.
    • EPD requires calibration runs (contact experts).
    • FST (time bin), sTGC, and others need timing adjustments (trigger and global timing with JH/Akio), followed by cal-scan.
    • Follow up with experts.
  • Update the detector readiness checklist:
    • Use tune_AuAu_2024 (zdc_mb), setup_AuAu_2024, and production_AuAu_2024.
  • 6x6 production: We can use minbias and regular production settings.
  • Shift sign-up issues: Dan is resolving these with various STAR institutions; we may run with one DO. Follow-up with Daniel/Pavel recommended.

Friday, October 4, 2024

Recap of the Past 24 Hours:

  • Cosmic data: Collected cosmic data throughout the day due to no beam development work.
  • Minor issues: Solved with help from experts.
  • Yellow Abort Kicker: Still awaiting new tools to address the issue.
  • RHIC power supply: Power supply work was completed; RF conditioning is ongoing.

Encountered Issues:

  1. Run control GUI crash:

    • The GUI crashed and disappeared at the start of the run.
    • Re-established connection using XLaunch, which helped restart the GUI.
  2. ETOW configuration failure:

    • Crate 1 and 2 experienced a failure.
    • After multiple unsuccessful reloads, ETOW was removed.
    • Expert rebooted the crates, but DAQ mon still showed errors (Crate 1). The shift crew followed suggestions from experts, but the issue remains unresolved.
  3. TRG L0 issue:

    • Run failed to start due to a TRG L0 error.
    • Power cycling the VME trigger crate resolved the issue.
  4. FCS dead:

    • The run was stopped multiple times (more than 60 seconds, three times consecutively) due to FCS dead errors.
    • DOs performed a power cycle on the VME trigger crate, which fixed the issue.
  5. EVB23 issue:

    • EVB23 was dead in the component tree, preventing the run from starting.
    • Awaiting further action from Jeff (work in progress).

Plan for the Day (RHIC & STAR):

  • Yellow Abort Kicker work and g9-blw-ps work.
  • Ramp development overnight.
  • Blue & Yellow injection setup overnight.
  • Continue collecting cosmic runs when no beam is available.
  • Perform the usual pedestal runs (FCS LED and others).
  • Possibility of controlled access for 4–6 hours (open for interest).

Looking Ahead:

  • First Au+Au collisions at STAR expected Saturday overnight (possible delay); first fill may be 56x56.
  • October 4–8: sPHENIX requests 111x111 initially, followed by 6x6 for background study once stable.
    • RHIC plans to install 56 MHz RF on Monday for narrow vertex. Stable beam is needed before Wednesday.
  • STAR needs/plans for bunches/luminosity:
    • TPC group requests the same crossing angle for 6x6 (one or two fills driven by sPHENIX needs).
  • Detector commissioning plan:
    • Compile a list of experts to call.
    • VPD will use Run23 files.
    • EPD requires calibration runs (call experts).
    • FST (time bin), sTGC, and others (timing adjustments, trigger, global timing with JH/Akio), then cal-scan. Follow up with experts.
  • Update detector readiness checklist:
    • Use tune_AuAu_2024 (zdc_mb), setup_AuAu_2024, and production_AuAu_2024.
  • 6x6 run can use minbias and regular production settings.
  • Shift sign-up issues: Being resolved; Dan is working with various STAR institutions. May need to run with one DO.

Thursday, October 3, 2024

Recap of the Past 24 Hours:

  • We ran cosmic data all day as no beam development work was done.
  • Ongoing Yellow Abort Kicker work.
  • RHIC power supply work completed, but RF conditioning will require more time.

Encountered Issues:

  • iTPC/TPC issues overnight: Power-cycling RDOs 1-4 did not resolve the problem. Currently, iTPC sector 1 RDOs 1, 2, 3 are masked, and iTPC sector 1 RDO 4 is not masked. (This is not correct according to Tonko.) ZhengXi will address this.
  • EVB23 in the component tree prevented the run from starting yesterday. The shift crew removed it. Jeff will look into it, and I will follow up.
  • A hot tile on the EPD was found yesterday. Maria is investigating it, and I will follow up.

Plan for the Day (RHIC & STAR):

  • Continue work on DX training.
  • Yellow Abort Kicker work.
  • RF conditioning of storage cavities.
  • Blue injection setup scheduled for tonight.
  • Yellow injection setup planned for tomorrow.
  • Continue taking cosmic runs.
  • Take usual pedestal runs (FCS LED and others).

Looking Ahead:

  • First Au+Au collisions at STAR: Scheduled for Saturday overnight, with the first fill possibly 56x56. (There may be a delay.)
  • Oct 4-8: sPHENIX requests 111x111 initially, then 6x6 for background study once stable. RHIC plans 56 MHz RF installation on Monday for narrow vertex; stable beams needed before Wednesday.
  • STAR needs/plans for bunches/luminosity? Remove crossing angle for 6x6 (to be determined).
  • Commissioning plans for detectors: VPD & EPD need calibration runs, FST (timebin), sTGC, and others (all timing adjustments, trigger, and global timing). Follow up with experts.
  • Plan for eTOF during this Au+Au run should mirror p+p configuration.
  • Update the detector readiness checklist (setup_auau200_2024_minbias, tune_auau200_2024_minbias, auau200_2024_minbias). Jeff will manage this, and the shift leader will review.
  • Urgently need a shift leader and detector operator for next week’s owl shift. Dan Cebra is unreachable; Frank will follow up.

Wednesday, October 2, 2024
 

Recap of the Past 24 Hours:

  • We ran cosmic data all day as no beam development work was done.
  • Akio implemented the initial file for the BBC high-voltage run for Au+Au 2024.
  • David Tlusty updated the TOF control for TOF West sector 5, resolving the previous issue.
  • ESMD was turned off as per Jacobs' request and will be excluded from future runs.
  • MCR postponed the Yellow Abort Kicker repair to today.

Encountered Issues:

  • EVB23 in the component tree prevented the run from starting. The shift crew removed it, and Jeff will look into it.
  • One hot tile on the EPD was identified. Maria is investigating.

Plan for the Day (RHIC & STAR):

  • No significant progress on DX training yet; it may start this evening or tomorrow morning.
  • Yellow Abort Kicker work is still ongoing.
  • RF conditioning: Most parts are running smoothly, and most components are conditioned.
  • Blue injection ramp setup is scheduled for tonight, with Yellow injection tomorrow.
  • We continue to run cosmics but will prioritize expert needs.

Looking Ahead:

  • First Au+Au collisions at STAR are expected Friday overnight, with the first fill potentially 56x56.
  • Oct 4-8: sPHENIX ramp-up/stochastic cooling, aiming for 111x111 (or possibly fewer), followed by 6x6 for background studies (planned for Tuesday) once silicon and TPC are stable. RHIC plans to install 56 MHz RF, and we'll explore the use of 6x6 (Gene will advise).
  • STAR needs/plans for bunches/luminosity: Remove the crossing angle for 6x6 (to be determined).
  • RHIC DX training and Yellow Abort Kicker work access will be completed.
  • eTOF plan for Au+Au should remain the same as for p+p runs.
  • Update the detector readiness checklist (setup_auau200_2024_minbias, tune_auau200_2024_minbias, auau200_2024_minbias). Jeff will follow up.

Tuesday, October 1, 2024
Recap of Past 24 Hours:

  • Yesterday we had access from 8:00 till ~16:00
  • Took cosmics for the rest of the day, went to APEX around 4 am, and then back to cosmics ~6 am.
  • CAS brought the magnet down at the beginning of the access for some cleaning work. Ramped up at about noon.
  • TOF: problem with W5 POS HV is fixed, so we are back to 100% ToF acceptance
  • eToF: replaced TCD fanout box, works now (?)
  • FST status checked, no further refilling needed
  • After the access, the STAR detector was put in APEX/beam development mode. Again took cosmics when no beam-related work was performed
Encountered issues
  • BSMD: CAEN HV system failed; no spare parts (controller board) are available. BSMD is off now and not included in the run configuration. Most probably BSMD will be unavailable until the end of Run24.
  • MXQ crate went off without apparent reason, fixed by power-cycling the crate
Plan for the Day (orders may be swapped)
  • Opportunity for restricted access from 8:00–18:00 (DX training for the collider); some DX training continues tomorrow (another access opportunity). Please let us know if you want to use this.
  • We continue running cosmic but prioritize expert’s needs
  • Separate work for Pulsed Power group for yellow abort kicker (need 6 hours of work)
  • RF conditioning of storage cavities overnight
  • Injection and ramp possible (APEX mode tonight !!)
Looking Ahead
  • First Au+Au collisions at STAR expected Wednesday (Oct. 2nd) night or Thursday (Oct. 3rd)
  • RHIC power supply not ready; DX training, access for the Yellow Abort Kicker, crossing-angle setup
  • What is the plan for eTOF during this AuAu run ?
  • Update the detector readiness checklist
  • Geary is asking about David’s schedule, IOC related work for TOF, Alex will contact him and cc Geary

 

Monday, September 30, 2024

Recap of Past 24 Hours:

  • Overall: Physics and data taking; the last p+p store ended at 08:00.

Encountered Issues:

  • BSMD HV did not turn ON. Shift team contacted Oleg, and then removed BSMD from the run configuration (starting at about midnight).
  • A few common issues with TPC RDOs, Trigger/RunControl, and sTGC

Plan for the Day:

  • Access opportunity till ~16:00
  • CAS brought the magnet down for some cleaning work. It needs coordination with Prashanth.
  • Plans for the access:
    • Inspect whether an FST cooling refill is needed
    • TOF: W5 POS HV cable to be moved from 7.5 to 3.0
    • Time permitting: eTOF: check TCD connection to the rack
  • After the access:
    • we plan to bring the magnet up
    • STAR detector stays in the APEX/beam commissioning mode

Looking Ahead

Tuesday, Oct. 1st: We expect a few hours of access opportunity
The last 3 weeks of the Run24 will be Au+Au collisions.
First Au+Au collisions at STAR expected on October 3rd.

Sunday, September 29, 2024

Recap of Past 24 Hours:

  • Overall: Physics and data taking

Encountered Issues:

  • DAQ: EVB24 started causing problems at about 8 am today, removed from the run configuration after consulting Tonko. Jeff will work on solving this problem later today.
  • ESMD: MAPMT FEE 4P1 turned red for a while; the problem fixed itself automatically
  • A few common issues with TPC RDOs and Trigger/RunControl
  • B1U polarization measurements not available for the current fill (#35153) due to broken target. The issue showed up at the end of fill #35152.

Plan for the Day:

  • Physics 

Looking Ahead

Monday, Sept. 30: Maintenance day

  • the last p+p store ends at 08:00,
  • then ~8–9 hours of maintenance
  • Plans for the access:
    • We plan to keep the magnet on
    • Inspect whether an FST cooling refill is needed

Tuesday, Oct. 1st: We expect a few hours of access opportunity
The last 3 weeks of the Run24 will be Au+Au collisions.
First Au+Au collisions at STAR expected on October 3rd.

 

Saturday, September 28, 2024

Recap of Past 24 Hours:

  • Most of the time: Physics and data taking
  • Problems with FCS ECAL. Akio and Tim made access at 19:40, replaced one of the MPOD modules for the FCS ECAL power supply, reconfigured it, and turned it back on.
  • We are running with ToF without West Sector 5, so with 90% of ToF acceptance (and 90% of expected ToF multiplicity). Jeff modified IDs for triggers that include ToF Multiplicity to keep track of runs with this state.

Encountered Issues:

  • Evening shift noticed 2 new cold tiles in the EPD West ADC plot. Performed "Reboot All" according to the EPD cold tile response manual, but that did not solve the issue. It should be resolved after running pedestal_rhicclock_clean; that run was postponed to the end of the fill and done in the morning.
    • Note to shift teams: please follow the EPD instruction carefully: After “Reboot All”, one needs to run pedestal_rhicclock_clean, even if we have beams.
  • A few common issues with TPC RDOs, Trigger/RunControl

Plan for the Day:

  • Physics 

Looking Ahead

Physics for the rest of the week
Monday, Sept. 30: Maintenance day

  • the last p+p store ends at 08:00,
  • then ~8–9 hours of maintenance

The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.
First Au+Au collisions at STAR expected on October 3rd.

Friday, September 27, 2024

Recap of Past 24 Hours:

  • One hour access for sPHENIX at 10 am, then some problems with injection.
  • back to Physics at 13:49, then data taking
  • EEMC HV monitoring is in "paused" mode to reduce the number of GUI crashes

Encountered Issues:

  • TOF: "over current" alarms in TOF HV West Sector 5, followed by the "Positive - HV West Sector 5" internal trip.
    • Shift team contacted Rongrong; the problem was not resolved during the night. Geary tried to fix the issue this morning, without success so far. We can run in the current ToF state (without TOF West Sector 5),
      which gives 90% of ToF acceptance and thus 90% of ToF Multiplicity. We need to change the IDs for triggers that include ToF Multiplicity to keep track of this situation. To fix the problem, we need a short access.
  • TOF+MTD gas system lost communication with Slow Control.
    • Alexei solved the problem by restarting the program that provides the connection to the Slow Control database.
  • Run Control disappeared, solved after contacting Jeff (RTS02 machine died)
    • Jeff will work on preparing a spare machine in the case RTS02 dies for good.
  • FCS trigger rates were running very high, which caused FST to go 100% dead in DAQ (two cases). Resolving the first required “reboot all”; the second was fixed by restarting the run.
  • EEMC GUI communication problem, solved by the shift team.
    • Note to shift teams: please read carefully the email from Will Jacobs on Sept. 26 about EEMC monitoring
  • A few common issues with TPC RDOs, Trigger/RunControl

Plan for the Day:

  • Physics 

Looking Ahead

Physics for the rest of the week
Monday, Sept. 30: Maintenance day

  • the last p+p store ends at 08:00,
  • then ~8–9 hours of maintenance

The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.
First Au+Au collisions at STAR expected on October 3rd.

 

Thursday, September 26, 2024

Recap of Past 24 Hours:

  • APEX (10:00 – 16:00), then machine development (16:00 -  18:00)
  • back to Physics at 19:16, then smooth data taking

Encountered Issues:

  • A few common issues with TPC RDOs, Trigger/RunControl, and sTGC
  • EEMC communication problem, resolved by the shift team
  • Shift team noticed a single hot channel in Sector 22 in the TPC Sec. 22 charge-per-pad plot, but no corresponding peak in the RDO-bytes plot. They were unable to identify the RDO and are not sure whether it is a known issue.
    • It is safe to continue data taking in such a case, but pay attention to TPC dead time.
  • FCS trigger was running very high, which caused FST 100% dead in DAQ. Resolved after restarting the run.
    • Note to shift leaders: Please pay attention to trigger rates. If the trigger rates are red, restart the run.

Plan for the Day:

  • Physics 

Looking Ahead

Monday, Sept. 30: Maintenance day

  • the last p+p store ends at 08:00,
  • then ~8–9 hours of maintenance

The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.
First Au+Au collisions at STAR expected on October 3rd.

Wednesday, September 25, 2024

Recap of Past 24 Hours:

  • Overall: Physics and data taking
  • TPC Grid Leak: David reset the HV power supply; the TPC Grid Leak is back to normal conditions

Encountered Issues:

  • A few common issues with TPC RDOs, Trigger/RunControl, and sTGC
  • EEMC communication problem, solved after calling the expert

Note to shift teams: this EEMC communication problem, and how to resolve it, is discussed in the EEMC manual. Please check it before
calling the expert.

Plan for the Day:

  • APEX from 10:00 to 16:00,
  • then 2 hours of machine development (16:00-18:00) (test of a new
    polarimeter target)
  • return to Physics at about 19:00   

Looking Ahead

Monday, Sept. 30: Maintenance day

  • the last p+p store ends at 08:00,
  • then ~8–9 hours of maintenance

The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.

 

Tuesday, September 24, 2024

Recap of Past 24 Hours:

  • Overall: Physics and data taking
  • 1 hour access for sPHENIX at 2 pm, shift team took Cosmics during the access
  • We are running with very stable beam conditions, so there is no need to take bgRemedyTest_2024 runs. They were removed from the Detector Readiness Checklist.

Encountered Issues:

  • TPC Grid Leak: sectors 20 and 24 lost their voltages. Alexei tried to fix the problem, but without success. Also intermittent alarms for
    sector 23.

    • We will request ~0.5 hour access today, after the current fill, to fix the problem (change the board)
  • FCS trigger rates were very high, which caused FST to go 100% dead in DAQ. Restarting the run resolved the problem (cleared the bad FCS conditions)
  • TPC gas PI-10 yellow alarm, shift team contacted Alexei, the alarm
    cleared by itself.
  • A few common issues with TPC RDOs, Trigger/RunControl
  • sTGC ROB #12 power cycled.

Note to shift leaders: please read carefully the "TPC reference plots and issue problem solving" manual. If there is a single hot channel,
there is no need to stop the run.

Plan for the Day: Physics

Looking Ahead

Wednesday, Sept. 25.:

  • APEX from 10:00 to 16:00,
  • then 2 hours of machine development (16:00-18:00) (test of a new
    polarimeter target)
  • return to Physics at about 19:00   

Monday, Sept. 30: Maintenance day

  • the last p+p store ends at 08:00,
  • then ~8–9 hours of maintenance

The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.

Monday, September 23, 2024

Recap of Past 24 Hours:

  • Overall: Physics and smooth data taking
  • Prashanth worked on the sTGC gas system. Shift team needs to monitor sTGC PT-1 pressure through database since its alarm is disabled, and call Prashanth if its pressure drops below 19 psi.

Encountered Issues:

  • A few common issues with TPC RDOs
  • Yellow alarms for sTGC PT-2 and PT-3 pressure.  Shift team consulted Prashanth. The pressure should be monitored. Call Prashanth if the pressure drops down to 14-15 psi.

Plan for the Day: Physics

Looking Ahead
- plan for tomorrow: Physics
- APEX on 9/25
- No maintenance on 9/25
- Maintenance moved to 9/30 (start of Au run)
- The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.

Sunday, September 22, 2024

Recap of Past 24 Hours:

  • Overall: Physics and smooth data taking
  • Alexei refilled GMT gas bottle

Encountered Issues:

  • A few common issues with TPC RDOs and Trigger/RunControl
  • sTGC gas interlock alarm went off (PT1 gauge showing a high pressure). Shift team contacted Prashanth and turned down the pressure regulator on the gas tank outside the TPC gas room.
  • sTGC ROB #8 was power-cycled; afterward its current was lower than expected (0.5 A vs. 0.7 A). Shift team consulted Prashanth. After starting a new run, the issue resolved itself.
  • FCS: DEP05:1 failed. The shift team called Tonko, restarting the run solved the issue.
    Note to shift leaders: Please read error messages carefully. In this case, one should try restarting the run one more time before calling the expert, as the error message describes.

Plan for the Day: Physics

Looking Ahead
- No maintenance on 9/25
- Maintenance moved to 9/30 (start of Au run)
- The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.

 

Saturday, September 21, 2024

Recap of Past 24 Hours:

  • Most of the time Physics and smooth data taking
  • Beam dumped at 19:00 for access for sPHENIX
  • Shift team took Cosmics during the access
  • Back to Physics and data taking at 0:30

Encountered Issues:

  • A few common issues with TPC RDOs and Trigger/RunControl
  • BSMD: HV GUI lost connection, resolved after consulting the expert (Oleg)
  • Important: If there is an issue with BSMD, do not wait for it: remove BSMD from the run configuration and start the run. Then try to resolve the problem.

Plan for the Day: Physics

Looking Ahead
- Physics during the weekend
- No maintenance on 9/25
- Maintenance moved to 9/30 (start of Au run)
- The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.

 

Friday, September 20, 2024

Most of the time Physics and smooth data taking

Encountered Issues:

  • Shift team was unsure what the state of the VME #55 GP-Pulser should be. It must be off!
  • Trigger: Critical error message for run #25264020 and #25264018: "l1 CRITICAL l1Task l1Task.C:#281 Got an invalid token: 0"
    • Shift team consulted Akio and continued data taking. Jeff will investigate this issue. If this error happens again, it is safe to ignore, but please make a note in the Shift log.
  • Inconsistent temperature alarms on VME #51 and #55
  • A few minor issues with TPC RDOs and configuring MXQ_QTD
  • sTGC PT-2 and PT-3 gas alarms went off, shift team consulted Prashanth
  • Shift team stopped the laser run when they noticed peaks in the TPC ADC vs. time plots.
    • During laser runs, one should expect spikes in the TPC ADC distribution, so no reason to stop the run. Please consult the "TPC reference plots and issue problem solving" manual.

Plan for the Day: Physics

Looking Ahead
- Physics during the weekend
- No maintenance on 9/25
- Maintenance moved to 9/30 (start of Au run)
- The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.

Thursday, September 19, 2024
 

Recap of Past 24 Hours:

  • APEX till 23:30, then ~1 hour access for sPHENIX
  • Unexpected beam loss during the first fill.
  • Physics started at 3 am.
  • eTOF: Do not include eTOF in a run until further notice from experts.

Encountered Issues:

  • RunControl:
    • Could not stop run from the run control. GUI froze. Resolved after calling Jeff.
    • Configurations were waiting for more than three minutes and runs were not stopping. Resolved by "reboot all" multiple times after consulting Jeff.
  • Hot channels in TPC
    • If there is a single hot channel, there is no need to stop the run. Flemming has sent to STAR operations email list his detailed suggestions for dealing with the hot channels in TPC:
    • Spike in RDO bytes
      • Identify the RDO
      • Is it a single hot channel?
        • Yes: continue the run and reboot at end
        • No: (for example: Noisy full RDO, Noisy FEE or parts)
          • Stop the run and reboot; for a laser run, stop once there are >3000 events recorded.
      • Example from last few  days
        • 261017 sector tpx 12-6  single hot pad
        • 263013 (last night) ~16 channels in TPX 18-6
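Flemming's triage above is a small decision tree; as an illustrative sketch only (the function name, arguments, and return labels are hypothetical, not part of any STAR software), it can be written as:

```python
def triage_rdo_spike(is_single_hot_channel, is_laser_run, events_recorded):
    """Illustrative sketch of the TPC hot-channel triage above.

    All names here are hypothetical; this only encodes the decision logic
    from the bullet list, not any real shift-crew tooling.
    """
    if is_single_hot_channel:
        # Single hot channel: keep taking data, reboot the RDO at run end
        return "continue run, reboot at end"
    # Not a single channel (e.g. noisy full RDO, noisy FEE or parts):
    if is_laser_run:
        # For a laser run, stop only once >3000 events are recorded
        if events_recorded > 3000:
            return "stop run and reboot"
        return "keep laser run until >3000 events"
    return "stop run and reboot"
```

For instance, the 261017 case (tpx 12-6, single hot pad) falls in the first branch, while the 263013 case (~16 channels in TPX 18-6) falls in the stop-and-reboot branch.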

Plan for the Day: Physics

Looking Ahead

  • No maintenance on 9/25
  • Maintenance moved to 9/30 (start of Au run)
  • The p+p run ends on Sept. 30. The last 3 weeks of the Run24 will be Au+Au collisions.


Wednesday, September 18, 2024


Recap of Past 24 Hours:
  • Access for sPHENIX at 10:30 am yesterday, then some problems with cryo
  • Shift team took cosmics during the accesses,
  • Detector Readiness Checklist was updated (updated eTOF procedure, a note about GMT)
  • Physics started at 5:16 pm, data taking till 7:30 am this morning
  • Akio uploaded new FCS Ecal HV files

Encountered Issues:
  • Run control GUI froze completely. Fixed by Jeff by terminating the Run Control GUI remotely on his end, then the shift leader started the GUI.
  • A few minor issues with TPC RDOs.
  • Error configuring MXQ_QTD Node -> Shift team powercycled VME crate MXQ, which resolved the issue.
Plan for the Day
  • APEX from 8:00 till 23:00.
  • Dmitry will do maintenance on databases.
  • Tonko will work on RDOs and TCD.
  • Access for sPHENIX at 23:00 to work on cooling of MVTX (estimated duration: 30 minutes)
  • Back to physics at about midnight
Looking Ahead
  • No maintenance on 9/25
  • Maintenance moved to 9/30 (start of Au run)
  • The p+p run ends on Sept. 30. The last 3 weeks of Run24 will be Au+Au collisions.


Tuesday, July 30, 2024

  • TPC sector 11 channel 8 anode tripped; cleared the trip manually

  • Spike for TPX sector 5 in this run; power-cycled

  • sTGC ROB10 channel 1 fluctuating

1st power dip (~11:00):

  • Lost the control for all the detectors. We got the global and sTGC interlock alarm, lost the power to the platform, lost water, network, MTD gas, air blowers.

  • PMD Power was off in the interlock page

  • powercycle the VME EQ4 crate 

  • All back on ~ 14:33

  • TPC LASER controls were reset, we see pico drivers alive now.

2nd power dip (~15:10):

  • MCW is running, but magnet tripped 

  • reset the FST cooling

  • Turn on the BBC and ZDC. VPD is responding, so turned them off

  • BCE in red in the component tree, then fixed

  • Will recovered EEMC 

  • BTOW, BSMD, ETOW, ESMD, FCS have been tested and ready to go. (18:04)

  • Magnet tripped (18:41)

  • restored control of TOF/MTD/eTOF HV and LV.

  • pedAsPhys run with TOF+MTD+TRG+DAQ, now only TOF tray 117 error remains, Rongrong masked out this tray.

  • Rebooted crate #63 (MXQ), rebooted trg/daq/l2. Now this run finished properly without any errors.

  • Magnet tripped again (21:41)

  • unable to turn on the VPD

Current issue:

  • “Bermuda” computer has a problem; Wayne had an access but couldn’t fix it. Copying the disk to a new one now; it was ~30% done at ~9:30. Wayne is also preparing a new desktop for this in the meantime.

  • MCW was lost due to blown fuses on the 80T chiller (for the MCW). Water is back online. Only MCW was lost; everything else is fine. (~6:20)

  • Lost the communication of TPC air blower (didn’t trigger the global interlock). - David & Tim

  • VME processor in Crate 80 initializes correctly but does not communicate. However, BTOW is back right now.

  • GLW lost communication; needs to be checked during access, or David can re-establish communication. Recovered (Tim).

  • Can't start run due to mxq_qtd: qt32d-52 VP003 QT32D is dead - 63 crate - Hank will call control room

  • mix, mix_dsm2 - 69 crate - Need a physical power-cycle - Tim

  • Laser can be turned on but can’t be tuned

To shifters:

 

  • Shift leaders: please pass all information to the next shift, walking through all problems that happened during the shift and any remaining problems

  • Check the weather before proceeding with the recovery, in case another thunderstorm/power dip happens soon

  • Clean the control room

Monday, July 29, 2024

Status & Issues:

  • TPC: 

    • #25210022,  a spike in the TPX RDO_bytes plot for sector 4. Power-cycled. 

    • #25211009, ITPC RDO S04:1, power-cycled

    • #25211016, iTPC RDO iS17:2, TPX S13:4, power-cycled

    • TPC Anode Trip, Sector 11 channel 8, 5 times; applied a 45 V reduction. Will also remind the SC expert.

  • Laser:

    • The laser can turn on but is not able to tune. Prashanth will try to fix it during the next access (Monday afternoon/Wednesday). 

    • Now the procedure for laser run is: 1) Warm up the laser in advance for 5 minutes and do not try to tune the laser. 2) After 5 minutes, start the laser run. Do not tune the laser during the laser run.

  • Trigger:

    • #25210037 couldn’t start the run, rebooted TRG+DAQ 

    • Carl did a test for the new trigger configuration. Need to do a quick check at the end of this fill

  • sTGC:

    • Red alarm from sTGC air blower AC failure; the problem could not be fixed during the run and required access. It triggered the sTGC interlock after about 20 minutes. DOs powered down the HV & LV. Shifters switched the bypass key from the left side to the right side following instructions from David.

    • David had short access ~ 18:30, then the sTGC blower AC was restored. (~18:50)

    • sTGC ROB 10 channel 1 (sTGC::LV2::114::1::imon) keeps making yellow alarms repeatedly and quickly disappears. (~01:12). 

  • TOF:

    • Prashanth & Jim restarted TOF/MTD archiver from the TOF machine in the gas room. Changed SF6 cylinder and Freon cylinder.

  • FCS:

    • Error in “FEE count 44, expected 55; FEE count 33, expected 55 -- restart run. If the problem persists contact expert”. Then got a “configuration error”. DOs power-cycled the FEEs and rebooted FCS in the run control, but still had the same issue. Called Oleg.

    • A problem with FCS ECal North: one of the MPOD PS boards shows 'outputfailure Maxs'; all voltages and currents are at 0. It is not clear whether it is a failure of the MPOD itself or caused by one of the ECal FEEs.

    • Gerard found that FCS power channel u4 configuration readback values were wrong; they looked like all defaults. Likely this channel got a radiation upset. Reconfiguring the MPOD with the setup script 'setup_FCS_LV.sh' restored correct operation.

    • FCS: DEP08:3 failed; restarting the run fixed the problem

  • Network:

    • MQ01 server: Disconnected the MQ01 server, unplugged all 4 disks from the MQ01 server, installed in the backup server (labeled in STARGW1), and then connected the backup server online with Wayne’s help. After rebooting the server, things seem to be working fine. DB monitoring is also back online.

    • TOF/MTD Gas monitoring: went to the gas room, and started the EpicsToF program. The PVs start to update online. Alarms cleared.

    • EPD: Tim forced a reboot of TUFF1 and 2. Now the EPD GUI reports "connected". Working fine now.

Schedule & Plans:

 

  • Cosmics 13:00-19:00 requested by sPHENIX; access: AC, FCS S 10, VME 62, BBC East 1, the fan of TOF (east VPD); reboot scserv (Wayne), TPC Laser (Prashanth)

  • Physics for the rest of the time

  • Low luminosity tomorrow or Thursday (6x6)

Sunday, July 28, 2024

Status & Issues:

  • TPC: 

    • #25209041, iTPC S13:1, DOs power-cycled it

    • #25209057, TPX  S02:6, DOs power-cycled it

    • #25209065, 100% TPX/ITPC deadtime for over 1 minute

    • #25210015, iTPC S09:3, DOs power-cycled it, but still get the same error, masked it out

    • #25210020, TPX S22:04, higher value in the TPX Total bytes per RDO plot; power-cycled it after the run

  • MTD:

    • #25209043, some hot strips in the MTD strips vs BL (CriticalShiftPlots->MTD->StripsvsBL) plot

  • Network:

    • 19:15, EPD: "TUFF[2] dead - check TUFF if RUNNING!"; 19:25, lost the connection; QA plots look okay

    • 00:00, TOF/MTD Gas: lost the connection. The computers in the gas room are running OK; it is just that the online database stopped updating

    • DOs visit the gas room once an hour to check the gas values in person; Alexei provided input on which values to look for

    • Lost control of the laser for cameras 1 and 3

    • Call from Wayne. He said the online monitoring network issue is caused by the MQ01 computer. He had us reboot MQ01 and check the network connection of the dashboard1 computer in the DAQ room. MQ01 is dead; will try to replace the power supply.

  • Others:

    • DAQ rate is a little bit high

    • TPC pulser crate #55 is in an unknown state! Please make sure it is OFF! - it is off

Schedule & Plans:

  • A short access after this fill (requested by sPHENIX); physics for the rest of the day

  • Tomorrow afternoon: 6 hours of cosmics requested by sPHENIX

Saturday, July 27, 2024

Status & Issues:

  • TPC: 

    • ITPC S11:2, masked out

    • TPX S19:3, power-cycled; Shift Crew should look for spikes in rdoNobytes, and if there are spikes, look at the appropriate sector ADC plots; details in the TPC reference plots and issue problem-solving slides.

    • TPX S01:6 (#25208024), power-cycled

    • iTPC S21:2 (#25208045, #25208046), power-cycled

    • ITPC S16:4 (#25308048), power-cycled

    • (#25208050 - #25208053) ITPC S17:1, S04:1, S16:4, power-cycled

    • (#25208057) a spike in RDO_bytes plot TPX S11:4, power-cycled

    • (#25209003) ITPC S16:4, DOs power-cycled it

    • (#25209005) ITPC S07:1, DOs power-cycled it

    • (#25209007) ITPC S17:4, DOs power-cycled it

    • (#25209016) ITPC S04:1, DOs power-cycled it

    • (#25209019) ITPC S16:6, S16:3, DOs power-cycled them

  • Environment alarm:

    • Had a temperature alarm again (13:30), followed by a series of similar alarms for different subsystems on July 22. Called MCR and Jameela. The CAS watch and AC people came and fixed the problem (~15:14). Jameela scheduled an AC maintenance on the next maintenance day.

Schedule & Plans:

  • physics for all-day

 


Friday, July 26, 2024

Status & Issues:

  • TPC: 

    • TPX: RDO S21:6, power-cycled

    • iTPC S02:1 power-cycled; still created problems, so masked out

    • TPX[28] [0xBA1C] died/rebooted -- restart a new run and it looks good

    • 25207049-25207052: ITPC RDO S18:4, many auto-recoveries; again late in the night (25207059); power-cycled it

    • (22:48) TPC Anode sector 1 channel 5 tripped; shifters tried to clear the trip but it didn't work, so they cleared the trip individually following the manual.

    • ITPC RDO S11:2 -- auto-recovery failed. Power-cycled this RDO manually & restarted the run. (25208018, 25208019)

  • FCS:

    • fcs10 issues: it gets stuck in the fcs10 HCAL South FEE scan. Tonko increased the logging level to capture it in the log for the next occurrence.

New guide for FCS: if the blinking issue happens again, try the following:

1) Powercycle FCS HCAL South FEEs in the FCS slow control.

2) "Reboot" FCS in the run control

3) Start a new run

4) If that fails, mask out FCS[10] and record it in the shift log
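The four steps above form an escalation sequence; here is a minimal sketch of that logic, assuming hypothetical callables for each slow-control action (none of these names correspond to real STAR commands):

```python
def recover_fcs_blinking(powercycle_fees, reboot_fcs, start_new_run,
                         mask_fcs10, log_shift):
    """Walk the four-step FCS recovery guide above.

    Every argument is a hypothetical callable standing in for a
    slow-control or run-control action; this only illustrates the
    order of escalation, not any actual interface.
    """
    powercycle_fees()          # 1) power-cycle FCS HCAL South FEEs
    reboot_fcs()               # 2) "Reboot" FCS in the run control
    if start_new_run():        # 3) try to start a new run
        return "recovered"
    mask_fcs10()               # 4) mask out FCS[10] ...
    log_shift("FCS[10] masked out")  # ... and record it in the shift log
    return "FCS[10] masked"
```

The point of the sketch is that masking FCS[10] is the last resort, taken only after the power cycle, reboot, and run restart have all been tried.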

  • TOF:

    • (#25208020) Several TOF trays went into error and performed auto-recovery. Got a red alarm from TOF LV THUB NE at the same time. After the auto-recovery finished, the red alarm disappeared.

The list of TOF tray in error:

TOF: tray # in error: 66 68 69 70 71 72 73 74 75 76 77 79 80 81 82 83 84 85 86 87 89 90 91 92 93 94 95 122 -- auto-recovery, wait 6 seconds…

  • (#25208022) TOF THUB NE went into auto-recovery and triggered the red alarm. The alarm disappeared after the auto-recovery finished


Schedule & Plans:

  • Physics all day and through the weekend

  • Cosmics next Monday (likely), requested by sPHENIX; Carl, Xiaoxuan & JH will work on the triggers during that time

  • Work planned on the list: AC, FCS S 10, VME 62, BBC East 1 & bwoo6, the fan of TOF (east VPD); reboot scserv (Wayne)

Thursday, July 25, 2024

Beam until around 15:30 (extended since 7:00); we had a short access to fix the BTOW problem after the beam dump; APEX until midnight; running physics until this morning.

Status & Issues:

  • TPC: 

    • (25206021 & 022) iS02:1, masked out; tpc.C:#621 RDO4: automatically masking FEE #7 error

  • Laser: 

    • Jim showed shifters how to operate the laser

    • Checked the magic crystals for the TPC lasers. The quantity of crystals is good and should last several more days.

    • Alexei and Jim decided to increase the amount of methane flowing to the TPC (slightly) to try to increase the drift velocity (it has been falling in recent days). So I turned FM2 clockwise by 3 mm at the end of the index needle.

  • TOF gas: DOs switched from TOF Freon Line B to Line A

  • BTOW: Oleg and Yu made an access and replaced blown fuses for crate 0x0b; it is configuring OK. Power-cycled PMT box 39 (on a separate power supply) and restored communications with boxes 39, 41, and 42. BTOW system restored and ready to go.

  • FCS: DEP10:6 was unmasked at 22:30 during fcs_led_tcd_only, but it created problems when trying to start the emc-check at the beginning of the fill (1:04). Tried rebooting TRG and FCS, didn't work; tried masking only 10:6, didn't work; masked the whole sector 10. Tonko will look at it

  • Run control: Run control froze this morning right before the beam dump, and the windows could not be closed at first. Forced it to close with the Windows task manager, but couldn't bring it back after several tries. Called Jeff, who found the vcx-server was not running in the background. Run control came back after rebooting the vcx-server (xlaunch). Since this happened at the end of the fill, when the beam was about to dump, the problem didn't affect any physics run. In the future, shifters can use the old shift crew PC (in front of the shift leader desk, RTS02) to start run control if this happens again and prevents us from starting/stopping a physics run

  • Network: 

    • Any new host attempting to connect to scserv (e.g. sc3, rebooted yesterday) initially fails in the same way. Wayne wants to reboot scserv to see if that changes anything, but will hold off until a maintenance period.

    • Temporary solution: if this issue is encountered again, please wait two minutes and try connecting again.

  • Others: 

    • #25207018: 

      • 06:03:03 1 tcd CRITICAL tExcTask mvmeIrqLib.c:#477 UNKNOWN EXCEPTION: Task 0x01DFE148 suspended, exception 0x00000400.

      • 06:03:03 1 tcd CRITICAL tNetTask mvmeIrqLib.c:#477 UNKNOWN EXCEPTION: Task 0x01DEDA70 suspended, exception 0x00000700.

    • #25207019: the EPD West hit count shows two (relatively) not-very-efficient areas. The issue disappeared in the next run.


Schedule & Plans:

 

  • Machine development is cancelled, so physics all day

  • sPHENIX is addressing the suggestions from the isobutane safety walkthrough; no clear schedule yet. Carl and JH will try to test the low-luminosity trigger configurations on Friday morning; Carl will send a guide summarizing the trigger configuration tests done last time

  • Work planned on the list: AC, FCS S10, VME 62, BBC East 1 & bwoo6, the TOF (east VPD) fan; reboot scserv (Wayne)


Wednesday, July 24, 2024

Status & Issues:

  • SC3: 

    • Lost control of the VPD, BBC, ZDC, and VME crates due to an sc3 CPU crash. David brought control of VPD/BBC/ZDC back on SC5; Wayne came and rebooted SC3

  • BTOW: 

    • Configuration failed error around 20:50; tried restarting the run, but the caution persisted. Then realized this might be due to the crash of sc3

    • Oleg T. found three BEMC PMT boxes (39, 41, 42) are dead, and they are masked out for now. 

    • Error at 05:21:09:  1 btow CAUTION btowMain emc.C:#467 BTOW: Errors in Crate IDs: 11;BTOW: configuration failed 

    • At a similar time (5:43), VME-9 emcvme9_i4val raised a red temperature alarm. Oleg suspects that the issue with the BTOW is due to the blown fuse.

    • Also had a problem connecting to the VME processes on the platform, for the BTOW data collector and BTOW canbus

    • An access is requested after this fill for Oleg, and Wayne (if needed)

    • now running without BTOW+BSMD

  • GMT: trip on channel u3. DOs performed a trip-reset operation.

  • Trigger: Hank pointed out the document on fixing trigger-related problems for shifters (https://www.star.bnl.gov/public/trg/trouble)

  • FCS: DEP10:6 failed again, masked from the component tree. To the shifters:

    • If it is a DEP 10:6 problem, mask 10:6 and run (already masked)

    • If it is a problem with the entire DEP 10, take FCS out of the run and contact Tonko

  • Others:

    • STAR control room door handle is fixed

    • An “umbrella” was installed as a temporary fix for the ceiling leaks

    • J.H. opened a BERT window for the beam-beam parameter, so we can now check the beam-beam parameter there.

Schedule & Plans:

  • APEX today (July 24) 8:00 - 00:00; due to a problem with the AGS RF cooling water, the beam was extended

  • Machine development assigned for tomorrow (July 25) 11:00-15:00

  • Still no clear timeline for when sPHENIX will flow the isobutane / have access / take low-luminosity runs. The trigger configurations will be tested on Friday morning (Carl & JH); Carl will send a guide summarizing the trigger configuration tests done last time

  • Work planned on the list: AC, FCS S10, VME 62, BBC East 1 & bwoo6, the TOF (east VPD) fan

 


Tuesday, July 23, 2024

  • unexpected beam abort (~ 20:06)

  • MCR had a false ODH alarm but, per the safety procedure, still dumped the beam early (~06:20)

Status & Issues:

  • TPC: 

    • TPX: RDO S09:5, recovered after starting a new run

    • #25204040: the TPC went 99% dead, which indicates the cause is external to the TPC. (By doing a replay of the DAQ you will see that at 12:27 the JP1 SCA trigger rate goes to 3 mHz)

    • #25204053, many ITPC RDO auto-recoveries and 100% TPX/iTPC dead time

    • RDO4: automatically masking FEE #7 

    • power-cycled TPX: RDO S15:6

  • EPD: Mariia Stefaniak tried to fix the EPD problem by rebooting TRG and DAQ and taking some pedestal_rhicclock_clean runs

  • sTGC: Before #25205016, shifters restarted the sTGC LV and found some empty lines in the sTGC hits/FOB plot and empty space in hits/Quadrant. Power-cycled after this run, and things were back to normal in the next run.

  • EEMC: 

    • (Day shift) red+blue indicator for sector 1, P1 (innermost circle) in the EEMC MAPMT FEE GUI. DOs followed the manual and solved the problem

    • a new noisy PMT in ESMD starting from Run# 25204041

  • Trigger: 

    • (at 9:45 am): 1) STP reset is failing; runs will not work, please power-cycle L0/L1 crate #62. 2) STP reset finally worked; do not power-cycle the L0/L1 crate

    • L0 and L1 got stuck on TRG + FCS; shifters rebooted all components but still failed to start the run, with FCS blinking. Called Jeff; took fcs[10] out, and the run could start. This morning it is working again

    • #25204066: There was a warning in daq monitor for L2. Event timed out by 97 ms, Token 861, Total Timeouts = 11, Timed Out Nodes = MIX_DSM2::dsm2-23. - Will be discussed in the trigger meeting

    • #25204068: BQ_QTD[trg] [0x801E] died/rebooted -- try restarting the Run. Shifters tried rebooting the trigger, which didn't work; then rebooted all, and the run could be started.

  • Others:

    • Takafumi brought up that the reference QA plots are out of date (https://drupal.star.bnl.gov/STAR/content/reference-plots-and-instructions-shift-crew-current-official-version); a list of recent good runs will be added as an additional reference

    • The control room AC is still leaking; Jamella came and said she will try to fix it ASAP

    • The door handle (white door) to enter the STAR control room is loose - call MCR and the maintenance team

Schedule & Plans:

  • Physics for the rest of the day with 6-hour fills 

  • Possible chance for access after the sPHENIX isobutane safety walk-through (starting at 11:00) in the afternoon. Work planned last time: AC, FCS S10, VME 62, scaler board 5 (BBC E) & bwoo6 (Chris Perkins), the TOF east VPD - we decided to wait for the next access

 


Monday, July 22, 2024

Status & Issues:

  • TPC: power-cycled TPX: RDO S02:3; RDO iS19:1 bad (#25203050, and a few runs after 25203051); power-cycled this RDO, but it did not work, so masked it out

  • #25203031 & 25203044 - The shift crew noticed in the QA plots that RDO_bytes have a spike around 75 (TPX Total bytes per RDO) - may be related to the dead time

  • FCS: Tim came and had access around 2 pm; Tonko with Tim checked the fPRE DAQ link for sector 10:6, DEP board #13 (counting from 1) in crate #3 (counting from 0). The issue remained after replacing the patch cable and SFP module...but in any case, from further evaluation, the issue seems to be with the DEP board. Time constraints for the access did not allow enough time to replace + configure + test a new board in the system. DOs unmasked sector 10 RDO 6. It was again not working at ~23:22, so shifters took it out, but it was showing OK during the midnight shift.

  • EEMC: Will reconfigured MAPMT box 1P1 (a.k.a. ESMD crate @71) at ~10:05. It responded to a reconfigure without trouble, so it appears OK. It then tripped many times during the day and still shows the errors in Crate IDs: 71. Following the manual can clear the trip; I will notify all the shifters about clearing this trip manually.

  • VME: VME62 got stuck. DOs reset it (14:56)

  • Environment alarm: 

    • TOF LV -> (East VPD) terminal voltage has been triggering the yellow alarm from time to time since ~16:00

    • Wide Angle Hall temperature was 30.7 degrees at 17:36 (yellow alarm), rising to 31.1 at 18:29 (red alarm). VME Crate 55 (not in use) temperature yellow alarm at ~19:00; sTGC LV ROB #10 current alarm at ~17:16 (yellow); VME Crate 51 PS air temperature transient yellow alarm at ~19:26. Called MCR, and they sent CAS Watch to STAR to have a look. It looks like the AC in the IR is not running, so the 2nd platform shows a high-temperature alarm; the original diagnosis is that they need access to fix it. Since the temperature was still OK to run, we scheduled access for the CAS Watch and AC crew to come, investigate, and fix it at the end of the fill (midnight). They then found both ACs for the WAH were down. They successfully turned on one AC, and the temperature started to decrease. Since the temperature is gradually coming back to normal and running is OK now, we will keep running until the next access

  • Trigger: 

    • #25203026, “The run got stopped due to: L2[trg] [0x8201] died/rebooted -- try restarting the Run”; could not start a new run twice, rebooted the trigger, and everything was running again

    • Jeff updated the low rate prescale setting for fcsDiJPAsy*EPDveto - Good so far

    • Hank power-cycled scaler board 5. Tim checked the patch cable for the BBC E PMT signal. The cable is connected and visually seems fine. But still no response. We will need to check further at scheduled maintenance access.

  • Others: 

    • Water leak in the STAR control room, which seems to come from bad sealing of the AC; the AC team was informed and ordered new parts to fix the problem

    • Water outside the STAR assembly hall; the maintenance team was informed and shut the water down.

    • If something similar happens, call MCR first (and/or Jameela), and then possibly the water group at x4668

  • BERT: the system freezes from time to time so notices don't pop up; keep an eye on the BERT system and restart it if needed

Schedule & Plans:

  • Physics for the rest of the day with 6-hour fills 

  • We are now running with one AC on in the IR; it looks fine so far, and we will try to schedule the work once there is a chance for a long access. So for the next access: AC, FCS S10, VME 62, scaler board 5 (BBC E), the TOF east VPD

 


Sunday, July 21, 2024

It was quite a smooth day for our data-taking.

Status & Issues:

  • TPC: #25202047 stopped due to TPC dead time (TPX: RDO S18:3 -- auto-recovery) 

  • Laser: DO and shift crew should check both drift velocity and charge distribution vs phi plot. The latter should show spikes at about the sector centers.  Two examples are printed and left near the laser PC and shift leader's desk. 

  • ETOF usually gets stuck about 3-5 minutes after the beginning of the run with the error: ETOF has 1136>1000 EVB errors. It keeps happening. We are currently running without ETOF. Do we want to include it?

  • FCS: FCS10 is ready to go after Tonko power-cycled the DEP crate. The DEP10:6 remains masked. -  request an access for 30 mins

  • ESMD warning: "ESMD: Errors in Crate IDs: 71 -- refer to Endcap Operations Guide, Data Transfer Issues", run with this warning error for the rest of the shift

  • EPD: Run 25202062 - The shift crew observed a new cold tile in the EPD West <ADC> plot.

  • Trigger: Hank noticed the BBC East scaler board 5 has a problem

  • Others: Ceiling leak in the STAR control room (above the coffee maker table); called site maintenance, and they are sending people. Another leak was found in the assembly hall (in front of the gas room); called site maintenance

Schedule & Plans:

  • Physics for the rest of the day with 6-hour fills (significantly more downtime now; need to discuss whether longer fills are OK at tomorrow’s meeting)

 

Saturday, July 20, 2024

Status & Issues:

  • TPC:

    • TPX S10:6 was masked out for #25201034, power-cycled, problem was fixed.  

    • iTPC S13:3 was bad; restarted the run to fix the problem.

    • TPX S09:3 and S23:4 bad; power-cycled them manually

    • iTPC S05:3 is masked out

  • BSMD: RDO 2 -- too many auto-recoveries stopped the run, Oleg looked at it, and it’s back now.

  • GMT: single-tripped HV module (u3). DOs followed the manual to clear the trip by resetting and restoring the channel (section 2).

  • FCS: DEP10:6 failed frequently yesterday in the early morning. Tonko looked at it and found many possible causes (the fiber optics interface is glitching, low voltage at the PS (unlikely), the fiber has been slightly dislodged, or some other board failure), but all need access. Tim found the location of the board (South: crate #3, DEP board #13 (counting from 0)), but we were not able to have access. Then FCS stopped the run at around midnight; called Jeff and tried masking out 10:1, 10:6, or 10:8, but still couldn’t start a new run. [fcs10 00:36:01 202] (fcsMain): ERROR: fcs_fee_c.C [line 1548]: S10:1 -- FEE scan failed [2]. Masked the whole sector (10) out; FCS->Pres->PresSouth is empty. Tonko looked at it this morning and fixed the problem in sector 10. We took an fcs_led_tcd_only run; it looks OK so far. DEP10:6 could still have problems; mask it if that happens.

  • Trigger: Jeff: Changed prescales for some FCS triggers to increase rates of low threshold triggers when the luminosity is low, according to Carl's triggerboard suggestions.

  • The Windows machine to monitor the magnet is back online now.

  • #25201048: run stopped by: 3514|esbTask.C|Recovery failed for RDO(s): 1 -- stopping run. Try restarting. Fixed after restart; not sure what the problem was

Schedule & Plans:

  • Physics for the rest of the day with 6-hour fills


Friday, July 19, 2024

Beam quality improved after machine development

Fill 34826: Physics for sPHENIX started at 20:23; Physics for STAR started at 21:19. Production run started at 21:25 with ZDC rate ~ 20k

Fill 34829: Physics for sPHENIX started at 00:30; Physics for STAR started at 1:17. Production run started at 01:24, with ZDC rate at 22.4k

Status & Issues:

  • TPC:

    • Unmasked iTPC RDOs: iS08-1; iS09-4; iS10-3; iS11-3; iS13-1

    • Have problems again after replacement. Masked: TPX S11-3; S11-6; S20-4; S20-5 

    • TPX[30] [0xBA1E] died/rebooted (#25201011) - a reboot did not seem to work, but it then came back by itself

    • TPX and iTPC were 100% dead due to ITPC S02:4, S18:4, S04:1 (#25200043); then ITPC S02:4, S02:2 (#25200044); ITPC RDO S10:3 (#25200051 - cosmic); ITPC RDO S10:3 (failed multiple times, masked out); iTPC RDO S08-1 (#25201006, failed multiple times, masked out) 

    • TPC Anode Trip (sector-23 channel 5)

  • TOF: TOF LV alarm (yellow) - power cycled TOF LV - cleared.

  • FCS: DEP10:6 failed - 4 times - Looks like the fiber optics interface is glitching. Tonko: Could be due to low voltage at the PS (unlikely) or the fiber has been slightly dislodged. Or some other board failure. - Need access?

  • Crate #84 on the 1st floor is yellow; no audible alarm. The PS temperature is about 46 with red status; the fan speed is 1200 with yellow status (evening shift)

  • BBC: Tim and Akio made access to fix the BBC scaler. It was a BBC-west discriminator which had offset from 0 in output. This was moved to working channel one below in the same module, and the output width was adjusted to 10nsec to match the old one. Now it's coming at a reasonable rate for noise & pocket pulser.

  • Windows: 

    • The shift leader computer crashed at 00:53 and 1:30 and was rebooted. TPC caen-anode HV alarmed during the second crash (25201005). DOs brought them back following the instructions by clicking "wake me up". We were not able to stop the run; by the time run control was back, this run had already gone over 15 minutes. The QA plots look okay, so this run is still marked as good. - run control can run on any of the Linux machines

    • The machine that monitors the magnet has not been recovered yet

Schedule & Plans:

  • Physics for the rest of the day with 6-hour fills


Thursday, July 18, 2024

  • Completed the scheduled access work during yesterday’s access: Network switch power supply (UPS), BSMD, ESMD crate 71, Magnet water for magnet, TPX, FST coolant refill, Powercycle main canbus

  • One fill so far since yesterday’s maintenance; 40 minutes after sPHENIX declared physics, we started with a STAR ZDC coincidence rate of ~22 kHz

Status & Issues:

  • TPC: 

    • RDO: power-cycled RDO S02:2 and S02:4; also power-cycled iS08:1 three times, but it still frequently stopped the run, so it was masked out

    • Anode trip once in the morning (sector-23, channel 5).

  • FST: FST -> HV -> ROD 3 and 4 in red, shifters brought them back manually.

  • The total DAQ rate was > 5K and the scaler rates were high in red (9M) for the JP and BHT triggers (25200008-25200013, 25200020-25200025). The DO originally thought it was a trigger problem, so called Jeff. Jeff mentioned it could be a problem with the triggering detector. Shifters did not see any problem in the QA plots. Tonko and Oleg called in and pointed out it was a DSM crate problem (L0-L1). Shifters power-cycled BC1, BCE, and BCW (VME 72, 73, 76). The rates look reasonable now. 

  • FCS trigger scaler rates were high, > 9M (25200029); recovered in the next run

  • TOF gas alarmed for PT-2; changed the bottle

To shifters:

  • The new expert call list is updated; contact Prashanth and/or Jim Thomas if there are any TPC-related problems 

  • Record in the shift log if a run is stopped due to the "TPC 100% dead" issues 

  • If experts don't pick up the phone when you call in the middle of the night, leave a message; there is no need to call multiple times. Experts will get to the problem as soon as possible after they receive the messages

Schedule & Plans:

  • 30-minute access requested by sPHENIX; possible to request another, longer access after machine development. We used this time in the morning to access and try to fix the BBC problem; power-cycled the crate, but it does not seem to have worked (Jamie & Akio). We will need a longer access if possible after the machine development time

  • Machine development: 1000-1400 (put the detector into APEX mode) - Tonko will work on the TPC during the APEX; request access after this if possible for BBC (Akio & Tim); 

  • Physics:  1400+


Wednesday, July 17, 2024

 

Status & Issues:

  • General: Beam dump around 7:30, magnet is down, having access now

  • TPC: S17:3 tripped; RDO iS17:4 bad; iS09:4 bad error; power-cycled S09-4 and S17-4; masked out iS09:4 in the end

  • TOF: PT-1 gas alarm, switched from B to A

  • ETOF: the eTOF DAQ reconfiguration procedure was not working: "ETOF configuring front end, be patient!" for hours after restarting the eTOF DAQ. Geary called in and fixed the problem for the next run. Then it had >1000 EVB errors again 

  • FCS: Akio uploaded new FCS Ecal HV file

  • STGC: a yellow gas alarm for the Pentane Counter at 12:39; bottles refilled by 14:51

  • L4: the L4 live event display has been updated to include global tracks again. The space charge parameters for L4 have also been updated. Users can now select global tracks or primary tracks themselves in the UI.

  • Trigger: 

    • Run 25199011 - By the end of this run, the rate increased to 4K, JP1 is 2.5K.

    • Could not start a run as trg-L0 got stuck; rebooted the trigger once

  • Others: 

    • unexpected beam loss ~ 2:54 yesterday and then 16:30 yesterday - request extra polarization measurement in the middle? - get statistics about unexpected beam loss

    • 40-minute delay after sPHENIX physics was declared for the last fill, due to miscommunication with MCR. Whether we keep 0 min or 40 min will be discussed during the spokesperson meeting 

    • PC with BERT got frozen for about 5 minutes (day shift)

    • AC in the control room is back - don’t touch the thermostat, contact Jameela if needed

    • To shifters: write the shift log on time, and write the summary log with more details on the problems

Access plan for today (to 16:00):

  • Network switch power supply (UPS) - Wayne

  • BSMD (with magnet off) - Oleg

  • ESMD crate 71

  • Magnet water for magnet - Prashanth

  • TPX - Tonko & Tim

  • Laser tuning - Alexei

  • FST coolant refill - Prithwish & Yu

  • Powercycle main canbus - David

  • Tours for students - 11:30 & 13:20 by Jeff, Prashanth & Yu

Schedule & Plans:

  • sPHENIX will request a few hours of cosmics and some fills with fewer bunches for low luminosity after changing to a new gas mixture: use this time for STAR to tune our triggers? (configuration changes should be discussed/finalized with more advance time due to the EPIC collaboration meeting next week)

  • During nominal daytime hours (0800-2000) CAD will operate with 4-hour stores after STAR is brought into collisions. Polarization measurements will be taken at 0 and 4 hours (skipping the 2/3-hour measurement). Outside of daytime hours, resume the nominal 6-hour store length after bringing STAR into collisions and follow the existing store recipes (i.e. polarization measurements every 3 hours and dump) - will be revisited after getting statistics on how often the unexpected beam losses occur

Tuesday, July 16, 2024

Status & Issues

• EQ1_QTD died/rebooted in run 2597030

• FCS power-cycled between fills (Oleg T.)

• Jeff updated some triggers after the trigger board meeting (FCS DiJP/DiJPAsy and EM0/1 with EPD veto); starting with run 25197047

• ETOW configuration error (crate 1 fixed by DO, crate 2 later required intervention by W. Jacobs)

• GMT u3 tripped and recovered by DO

• Beam dumped for SPHENIX access (EMCal); next fill lost due to QLI

• L0 stuck, rebooted (x2)

• iTPC/TPX 100% dead in three runs

• Fill dumped just after 9 am for another SPHENIX access to fix EMCal problems

• Issues with the l2ped web page persist; the plots are all available, but the archive is not updated properly, which causes index.html to stop at July 3 (l2btowCal has a similar problem, but stops at July 7)

Schedule & Plans

• Maintenance day, Wednesday 0800-1600

  o Network switch power supply (UPS)

  o BSMD (magnet off)

  o ESMD crate 71

  o Magnet water for magnet (Prashanth will check if water group is ready for valve replacement)

  o iTPC/TPX recovery (Tonko, Tim)

  o Laser tuning (Alexei)

• Then back to physics

 

• SPHENIX will request a 56-bunch fill for low luminosity in a few days; possibilities to use this for STAR? (configuration changes should be discussed/finalized with more advance time due to the EPIC collaboration meeting next week)

 

Monday, July 15, 2024

Status & Issues

• Connection to VME was lost at the start of fill in the morning; DAQ warning about crate #55 (pulser); resolved in consultation with David; power-cycled following the slow control manual; VME 50 was still yellow; power-cycled between runs; lost connection to the gating grid and cathode interlock, recovered by David

• Beam abort with anode trip about an hour before scheduled dump time

• Cosmics for a few hours; observed higher rates than before

• iTPC deadtime spikes in run 24196057; L1 invalid token at start of run

• iTPC RDO iS13-1 masked after unsuccessful attempts at power-cycling

• Other RDOs which required manual power-cycle: iS13-2 iS13-4

• iTPC/TPX 100% dead (in three runs)

• high rates in forward triggers in run 25197028; stopped quickly and started new run

• level 2 monitoring plots have not been updated on the web page; the analysis is producing output, but it is not updated on online.star.bnl.gov/l2algo/l2ped

Schedule & Plans

• SPHENIX is asking for a short access after the current fill

• SPHENIX rates at the start of fill are currently below the 24 kHz threshold to bring STAR on; detectors should be brought up when physics is on for SPHENIX

• Continue with physics until Wednesday morning (maintenance day)

 

Sunday, July 14, 2024

Status & Issues

• 30 minute access turned into closer to 2 hours; new fill after 3 hours

• BTOW configuration errors while trying to take pedestals; rebooted trigger

• Then L0 hangs; reboot trigger; power-cycled VME-62 (twice)

• ESMD errors in crate #71 at start of every run; Will was informed and we can ignore this for now (EEMC MAPMT boxes 1S3 and 1P1)

• DAQ message “requesting reconfigure from Run Control” in combination with power-cycling RDO S20-5 and “critical: RECONFIG ERR  tpx-34”; masked out S20-5; eventually able to start run after trigger rebooted

• Mostly smooth data taking through late afternoon and night; bgRemedyTest with 10k at start and end of each fill

• BTOW configuration failed in two more runs (not consecutive)

• EPD timing scan in runs 25195082 – 086

• sTGC hits/timebin has low counts early in fill 34799 (has happened before in some runs last week)

• L0 hangs one more time

• One run ends with 100% deadtime TPX & iTPC

Schedule & Plans

• Continue running physics until maintenance day (Wednesday)

• Include bgRemedyTest in fills as before (10k events)

• Discussion of beta* tomorrow

• Discussion of EPD timing cuts in trigger board meeting on Monday

 

Saturday, July 13, 2024

Status & Issues

• Akio power-cycled scaler crate; BBC And is back

• TPX RDO S11-3 and S11-6 are masked out due to power problems; Tim needs to take a look during maintenance day

• iTPC RDO iS09-3 investigation is continuing (added error messages for Tonko); mask again when it fails

• EPD veto on early hits is now in the production files (starting from run 25194034); shift crews have observed differences in EPD <TAC> (EPD expert suggested to reboot trigger and take pedestal_rhicclock_clean afterwards, this should have been added to the shiftlog)

• Stuck bit caused the high rates in EHT0; power-cycled TP-2 crate (Will J.)

• Rongrong tried to recover MTD BL 28; unsuccessful, still masked out

• Trigger group tested tier 1 file; everything back to default (?)

• Took some cosmics due to extended access/downtime

• Collisions at 1820

• Shift crew encountered: L0 died/rebooted, TPX[8] died/rebooted, iTPC RDO iS10-3 power-cycle (repeatedly, then masked), iTPC[10] had to be power-cycled manually (Jeff)

• Power dip between fills at start of night shift with magnet trip, global interlock alarm, TPC FEE and RICH scalers white

• Magnet back up at 2:35 am

• Oleg T. recovered BEMC after MCW loss; HT TP 163 and 291 are masked out; BSMD is 50% dead and was turned off (until maintenance day)

• FST failure code 2 before first production run

• Combination of high rates in JP triggers and TPX/iTPC deadtime; rebooted trigger; power-cycled all RDOs; then again RDO S20-4/5 (again in the next run)

• Will J. recovered all MAPMTs for ETOW; an issue remains with MAPMT 1P1 (Will says it’s overheating, experts are aware)

• Run control was very slow in the morning, it seems to be running fine now

Schedule & Plans

• Continue physics data taking: pp200_production_radial

• bgRemedyTest_2024 at start and end fill

• EPD delay scan in next fill (non-intrusive during regular production run, see Hank’s email for details)

 

Friday, July 12, 2024

Status & Issues

• MTD HV trip in BL 15, 16 & 17 (early in fill 34785); power-cycled and back for next run

• Magnet trip at 10:30 am; strainers were cleaned during our downtime, but it is not completely clear where the problem is; valve replacement is ordered and should be replaced during maintenance; David Chan and team looked through temperature logs from different locations; magnet ramped up after 5 pm, temperatures looked fine and stabilized well under the trip threshold

• Network power switch died (splat-s60-2); Wayne was able to diagnose remotely; Jeff and Tim prepared access work; UPS was in “overheat error”; Tim plugged the network switch into the rack power

• MCR did a vernier scan for themselves while the magnet was down (and optimized our rate…?)

• Some problems coming back; Jeff, Rongrong, Gavin on zoom; one fill lost during ramp; everything was back for collisions at next fill

• MTD BL 28 is masked out

• FST problems with RDO 1-5 and 2-6; no problems when detector was at full HV

• BBC And is 0 in scaler GUI (Akio is looking at it)

• BTOW configuration failed in one run

• sTGC yellow/red gas alarm again this morning (Prashanth has been informed)

• elevated temperatures on VME-84 and 98 (EQ4, BDB)

• bgRemedyTest_2024; runs 25193…, 25194009, 017, 030

• ETOW HT trigger patch #81 is hot; EHT0 rate too high (prescaled at 50 now)

Schedule & Plans

• 1.5 hour access after this fill; dump time moved up to 10 am (condensation in tunnel, SPHENIX)

• STAR to get collisions at 24 kHz (SPHENIX)

• Carl’s bgRemedy studies confirm efficiency of background rejection for forward triggers; will send summary with configuration changes

• EPD delay scan (5 production runs) waiting for confirmation from Hank

 

Thursday, July 11, 2024

Status & Issues

• APEX study of spin direction at STAR was not successful and postponed

• Magnet trips at 12:17 pm and again at 1:26 pm; magnet at half field until 6pm, then back up to full field

• Collisions at 6:45 pm (75 minutes after SPHENIX)

• Several problems when starting run; BTOW configuration; TOF LV THUB NW tray 45, west 4 (power-cycled); iTPC RDO iS09-3 masked out

• Beam lost at start of second physics run

• East and west trim currents were not ramped up to full field; NMR showed 0.4965 T instead of 0.4988 T; ramped at 8:20 pm (mark the two runs as bad)

• Overnight fill with horrible yellow lifetime (tune changes during the ramp); STAR only 20 minutes behind SPHENIX but rates low from the start

• bgRemedyTest_2024 (runs 25192042, 25193016, did not include FCS)

• sTGC gas alarm (fluctuating, Prashanth was made aware)

• Tonko already looked at problematic RDOs from last night; iS01-1 reburned PROM; iS09-3 not clear what is wrong, unmasked again; iS09-4 disabled 4 FEEs

• STAR a little more than an hour behind SPHENIX

Schedule & Plans

• Physics until maintenance day (Wednesday, July 17)

• Vernier scan at the end of current fill (early afternoon)

• bgRemedyTest_2024 at start and end of two fills (Hank will double check tier 1 parameters and file/dates)

• Timing scan in regular runs on hold until after bgRemedyTest

 

Wednesday, July 10, 2024

Status & Issues

• TOF freon changed to bottle A

• epdTest-radial in new fill (run 25191030); cuts on early hits look good; in the process of being implemented -> bgRemedyTest_2024 is ready

• TPX RDO S01-5

• GMT u3 HV tripped (DO recovered, no further issues)

• L2 died/rebooted during configuration of one run; started new run without problem

• MTD low voltage THUBN alarm (run 25191041)

• iTPC RDO iS10-3 was masked out after repeated failures in pedAsPhys (waiting for collisions while SPHENIX was already up); Tonko reburned PROMs on iS06-2 and iS10-3 and unmasked them before APEX this morning

• Took cosmics data until APEX

• iTPC cluster occupancy in QA histogram is out of range early in the fill (e.g. compare runs 25191031 & 46)

Schedule & Plans

• Study of polarization vector during APEX today; take zdcPolarimetry runs when MCR does scan of different parameters (15-20 minutes x 2)

• Back to physics at 1600

• bgRemedyTest_2024 at start and end of fill

• trigger group requests five regular runs with modified settings (non-invasive to physics, details in Hank’s email)

 

Tuesday, July 9, 2024

Status & Issues

• iTPC RDO iS17-3 fixed and unmasked (Tonko)

• 3+ hours of cosmics data; first fill dumped after sPHENIX requested access (about one hour of collisions for sPHENIX)

• epdTest-radial with new TAC stop registers (run 25190055)

• iTPC RDO iS06-2 masked out after unsuccessful power-cycle

• Timeouts in l2ana01; low data rate (not sure if this is related, happened about 2 minutes apart)

• TPC anode trip S20-9

• TPX[24] died/rebooted

• iTPC RDO S02-4

• Fill extended due to problems with injection / BtA

Schedule & Plans

• bgRemedyTest_2024 updated after discussion in trigger board; ready for use once tier1 file is updated (Hank); take short run at start and end of each fill (TRG+DAQ+BEMC+EEMC+TOF+FCS)

• Wednesday APEX 0800-1600; continue physics until then

• Schedule a vernier scan in the near future (at the end of a fill)

 

Monday, July  8, 2024

Status & Issues

• GMT gas bottle replaced (reminder: even after switching to new bottle, the alarm keeps going until the empty bottle is replaced)

• TPX/iTPC RDOs: S11-6 (now masked out); iS02-4; power-cycled all after three failed attempts at run start

• TPX[31] died/rebooted during pedestal run

• Peak in TPC drift velocity is sometimes wide (run 25190013, improved in run 019)

• Magnet trip in fill 34764; restored without beam dump; polarization also looked ok in the next measurement

• Lost beam twice during injection / ramp

• Beam abort this morning; lead flows in sector 10 (problematic all week, being investigated now)

• EPD timing test looks good; background removed effectively (Hank)

Schedule & Plans

• Time between sPHENIX and STAR physics: over 100 minutes!

• Uptime 14 hours on Saturday; less than 9 hours on Sunday

• Short epdTest-radial in next fill

• Continue physics: pp200_production_radial until Wednesday morning (APEX)

 

Sunday, July 7, 2024

Status & Issues

• bgUpcTest with all detectors (25188041, 61, 68, 25189008)

• Lost laser view; no laser runs in fill 34758; Alexei got a short access between fills and restored the connection to laser platform

• sTGC: ROB #03 bad FEB required power-cycle

• TPX[30] [0xBA1E] died/rebooted (running fine in the next run)

• Magnet trip at 6 pm; CAS were unable to clear the fault; clogged strainer for the supply; cleared by 6:55 pm when RHIC had just started injecting beam; ramped magnet and restarted RHIC fill

• TOF pt-2 alarm procedure updated (Alexei)

• GMT U3 HV tripped once

• Two peaks in “TPX Total bytes per RDO” (sectors 6 & 21); power-cycle cleared this

• sPHENIX had problems bringing down one of their detectors; unfortunately, MCR called us first while we were waiting for “ready to dump” from sPHENIX; ended up with a 30-minute zdcPolarimetry run

• Some issues with too many TOF recoveries; power-cycled LV; eventually had to go through the CANbus restart procedure, which solved the problem

• TPX RDO 17-5, 11-6, 11-3; iTPC RDO 02-4

Schedule & Plans

• Time between sPHENIX and STAR physics: 13, 8, 16, 32, 32, 59 minutes

• bgUpcTest is finished -> decision from trigger board (Monday)

• Continue physics: pp200_production_radial until Wednesday morning (APEX)

• Wayne is not available next week (call Jeff for immediate help)
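The "time between sPHENIX and STAR physics" figures reported above are simple wall-clock differences. A minimal bookkeeping sketch (function name and HH:MM format are assumptions, not part of any STAR tool):

```python
from datetime import datetime

def minutes_between(physics_on: str, star_physics: str) -> int:
    # Same-day wall-clock times as "HH:MM"; midnight crossings are
    # deliberately not handled in this sketch.
    fmt = "%H:%M"
    delta = datetime.strptime(star_physics, fmt) - datetime.strptime(physics_on, fmt)
    return int(delta.total_seconds() // 60)
```

For example, a fill where sPHENIX declared physics at 14:00 and STAR reached physics at 14:13 would log as 13 minutes.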

 

Saturday, July 6, 2024

Status & Issues

• Machine development; first attempted fill around 4 pm; lost as STAR was going into collisions
• TPX/iTPC maintenance (Tonko)
o TPX S11-6 reburned PROM (still problematic)
o iTPC iS7-2: an older problematic RDO, believed fixed; unmasked
o iTPC S9-3: masked FEE #6; unmasked RDO
• NMR is now recorded in database; still logged overnight but not necessary anymore
• Jeff updated expected rates for DAQ monitoring
• Took EPD test runs for trigger group (epdTest-radial, runs 25187037, 38, 39, 40)
• Test configuration for bgUpcTest; 2-3 minutes at start and end of fill; didn’t use all required detectors in recent fill; redo in next fill
• EEMC HV yellow alarm; had to reload HV values several times; some sectors went from red to black; ran without ETOW in the meantime
• Trigger 100% dead, couldn’t properly end the run; force stop and reboot trigger
• A few runs going into TPX/iTPC 100% dead after a few minutes this morning
• TPX RDO S11-6, S15-6; iTPC RDO iS12-3, iS17-3 (recurring, had to be masked)

Schedule & Plans

• Time from Physics:On to STAR collisions: 13 min, 8 min, 16 min
• Beams look stable after sPHENIX is in collision; we can start bringing up detectors after MCR call; rates should be monitored carefully until STAR gets to Physics:On
• bgUpcTest with all detectors (only need one good fill)
• Continue physics: pp200_production_radial until Wednesday morning (APEX)
 

Friday, July 5, 2024

Status & Issues

• At the start of fill 34747, problems with TPX[30] starting up and with failing STP resets. The run couldn’t be stopped properly, and power cycling VME #62 wasn’t successful. Akio looked remotely but also couldn’t help. Jeff eventually separated the trigger problems from TPX. pedAsPhys was successful on the second try. Then a hard reset of TPX[30] in the DAQ room. (The dsm2-3 error in STP monitoring is not critical for data taking.)

This happened again when fill 34748 was lost. Shift crew tried to power-cycle VME #62; no success from control room or Jeff remotely. David got a short access, couldn’t power-cycle on the crate itself. Tim was not available, so we decided to hard reset (unplug). Fortunately, this solved the problem and VME #62 came back just as RHIC was about to reinject.

• TPX, iTPC & FST deadtime issues in a few runs throughout the day. Clarified run-marking and recovery procedures with the shift crew. (Many auto-recoveries in early runs of the new fill.)

• sTGC pressure PT-1 yellow warning (fluctuating around threshold, may reappear during the daytime)

• FCS DEP 04:5 failed once (DAQ message has instructions for shift crew, no further issue)

• David changed the sTGC gas bottle

• Manual power-cycling of TPX RDOs 11:6 (many times), 22:6, 03:4, 14:6

Schedule & Plans

• Machine development today (~5 hours)

• Test run for trigger modifications; details will follow (Carl, Jeff)

• Continue physics: pp200_production_radial through the weekend

• Suggested to try to reproduce the VME #62 problems during next maintenance day for better diagnosis

 

Thursday, July 4, 2024

Status & Issues

• Lost beam before 10 am; then machine development

• BSMD shows high current at start of fill; Oleg T. said to run as is and power cycle later (~90 minutes)

• New cold channel in EPD (run 25185031)

• TPC 100% dead at start of one run (three other runs where it happened later; run 25185031 should not be marked as bad)

• Pentane refilled (David)

• NMR can be read from the control room now (David)

• Hank asked for repeat of epdTest_radial (run 25184043)

• “PCI Bus Error: status: 0x01” in emc-check and next run; reboot TRG + DAQ

• A few runs with TPX 100% deadtime after a few minutes; then L2 timeouts -> reboot all fixed it

• TPX RDO S11:3, S11:6; iTPC RDO S09:3 (many times this morning, now masked out)

• Beam abort around 5:20 am; beam permit dropped and couldn’t be cleared remotely; ran cosmics for a few hours; new fill coming up now

Schedule & Plans

• Machine development on Friday (about 5 hours)

• Continue physics: pp200_production_radial

 

Wednesday, July 3, 2024

Status & Issues

• BTOW crate 0x10 was recovered; trigger patches for this crate were un-masked and tested (Oleg T, Tim)

• MAPMT sectors 2&3 HVSys module replaced (Tim)

• TPX & iTPC maintenance done (S11-6 seemed ok, failed once during cosmics)

• Cosmics data throughout the afternoon

• TPC anode sector 23 channel 5 tripped; “clear trip” didn’t work; manual recovery

• Maintenance extended until 8 pm (request from sPHENIX); new fill up by 9 pm

• Jeff added log info for STP failure -> power cycle L0/L1 crate #62

• GMT HV gui wasn’t responding; DO power-cycled the crate following the manual

• Intermittent yellow alarm on sTGC PT2 & 3

• BEMC CANBus needed to be rebooted (white alarm on CANBus, VME-1, 12, 16, 20, 24, 27)

• epdTest runs (25184076, 077, 078 - all EPD detectors see the early hits now, detailed analysis is on-going)

• FST random noise (non-ZS) plots are empty (run 25185002)

• EEMC gui turned white after beam loss; two yellow warnings remain (VME-90, 97); expert was informed; ok to run for now

• ETOF was taken out of run control (Geary’s email); Norbert called around 2 am and said it should be ready again

• Power-cycle TPX RDO 20:6

Schedule & Plans

• Machine development from 10 am until 1 pm (or earlier); sPHENIX asked for 10 min. access

• Continue physics: pp200_production_radial 

• APEX Wednesday, July 10 (maybe later)

 

Tuesday, July 2, 2024

Status & Issues

• Trigger configurations updated with TofMult0 after discussion in the trigger board meeting; everything handled in the existing configuration files, no need to change the procedure for the shift crew (in effect from run 25183041; bgRemedyTest_2024 not needed for the time being)

• iTPC RDO 02:4 manual recovery

• FST deadtime 100% (Fleming suggested a correlation with trigger rates at the beginning of the run; check run 25184013; mark runs as bad)

• BSMD HV not ready for first run in fill 34733

• TOF gas switched to line B (11:51 pm)

• TPX/iTPC 90% dead for three attempted runs; eventually masked TPX RDO S9:4 (Tonko, done)

• Maintenance day: beam dump at 8:10 am, magnet down

Schedule & Plans

• Sweep at 5:30 pm

• Wednesday: machine development (2nd storage ramp, 10 am, 3 hours)

• Next APEX: July 10 (possibly postponed / combined with next session)

 

Monday, July 8, 2024

Status & Issues

• BSMD trips at beginning of fill

• TPC gating grid error and anode trips (first run in fill)

• Investigation of “non-super-critical pedestal problem in EQ4” (Maria, Mike); slightly shifted ADC spectra, does not affect the trigger at the moment, will communicate with trigger group if this changes

• evbx2 connection error? (run 25182073), L2 died/rebooted in the next run; all ok in 076

• evb01 | sfClient | Can't stat "/d/mergedFile/SMALLFILE_st_upc_25182078_raw_2400007.daq" [no such file or directory] (run 25178079)

• New alarm (buzz) for critical alarms in DAQ log (DAQ_announcer.sh, David)

• Beam loss at 1:25 am; regulator card on yo1.tq6 replaced at 5:25 am

• VPD alarm on slot 15-13; DO couldn’t recover; Akio looked remotely and said to ignore for now; slow control should take a look and maybe change limits (3 V out of 2 kV)

• Took cosmics data for the rest of the night
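The new audible alarm for critical DAQ log entries (DAQ_announcer.sh, noted above) can be sketched as a simple log filter. This is a hypothetical illustration, assuming a "CRITICAL" keyword in the log lines; it is not the actual script's contents:

```shell
#!/bin/sh
# Hypothetical sketch of a DAQ log "announcer": read log lines from
# stdin and sound an audible alarm (ASCII BEL) on critical entries.
# The "CRITICAL" tag is an assumption, not the real DAQ log format.
watch_daq_log() {
  while IFS= read -r line; do
    case "$line" in
      *CRITICAL*) printf '\aALARM: %s\n' "$line" ;;
    esac
  done
}

# Example usage (path is illustrative):
#   tail -F /path/to/daq.log | watch_daq_log
```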

Schedule & Plans

• Background discussion

  o Vertex study (special production with no vertex constraint, Ting’s fast offline analysis)

  o bgRemedyTest_2024: 25182039, 047, 061, 069, 25183021

  o Current bunch intensities are close to the loss limit at ramp (recent losses during rotator ramp)

• Test of separated collision setup for sPHENIX and STAR

• Continue physics: pp200_production_radial

• Maintenance day tomorrow (Tuesday, 9 am – 4:30 pm)

  o Magnet ramp down after beam dump

  o EEMC PS (Tim, Oleg)

  o TPC electronics

  o Laser (Alexei)

  o Windows update on shift leader desktop

 

Sunday, June 30, 2024

Status & Issues

• TPC RDO S11:6 remains masked

• ETOW configuration failed in one run

• L2 died in one run

• Isobutane fraction was higher than expected; followed the procedure for restoring the ratio (after a 30 min. wait)

• TPC, iTPC, FST hung a few runs at 100% deadtime; shift crew takes action within 2-3 minutes (when it doesn’t self-recover)

• “FCS: powercycling DEP02:4” turns into “K?[0m” in DAQ monitor

• TOF LV needed power cycling after too many errors (detector operators; tray 54 west 5 needed manual intervention)

• BSMD had some trips early in the fill; excluded for one run

• Manual power-cycle on iTPC RDO iS13:3, TPX RDO S06:4

• sTGC: ROB #04 bad FEB (followed procedure to start new run; power-cycled eventually)

Schedule & Plans

• bgRemedyTest_2024: 25181040, 045, 059, 067, 25182019, 025

• Special fast offline production for background studies is running (and progressing nicely) [fills 34714, 16 after the most recent modifications to the beam on Thursday]

• Continue physics: pp200_production_radial

• Good turnaround times for RHIC with current bunch intensities; sampled luminosity still a little below 50% of 2015

• Windows update on shift leader desktop (maintenance day)

 

Saturday, June 29, 2024

Status & Issues

• “sTGC hits / timebin” high early in fill (25180030)

• TOF gas changed (PT-2): methane last night, isobutane this morning

• bgRemedyTest_2024 run 25180057

• FST 99% dead; started a new run

• TPX & iTPC 100% dead repeatedly and not recovering; power-cycled RDOs S05:6, S11:3 and S11:6 (twice); S11:6 failed twice more in the new fill and is masked out for now

• BSMD had difficulty ramping in the new store; excluded for the first few runs

Schedule & Plans

• Jamie updated the ZDC coincidence cross section: 0.23 mb (down from 0.264 mb in previous years)

• Need gas bottle delivery; will run out in about 18 days (Alexei, Prashanth)

• sPHENIX is slow in ramping down (polarimetry & beam dump); we may gain 5 minutes before ZdcPolarimetry at beam dump

 

• Modifications to trigger configuration: get more data with bgRemedyTest_2024 with TRG+DAQ at the beginning and end of fills throughout the weekend (takes about two minutes each with TOF; other detectors can ramp HV; bgRemedyTest before ZdcPolarimetry)
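The updated ZDC coincidence cross section above is what converts a measured coincidence rate into a luminosity estimate via L = R / sigma. A minimal sketch — only the 0.23 mb value comes from this log; the example rate and function name are illustrative:

```python
# 1 millibarn = 1e-27 cm^2
MB_TO_CM2 = 1e-27

def inst_luminosity_cm2s(zdc_rate_hz: float, sigma_mb: float = 0.23) -> float:
    """Instantaneous luminosity (cm^-2 s^-1) from a ZDC coincidence rate.

    Hypothetical helper: L = R / sigma, with sigma defaulting to the
    updated 0.23 mb ZDC coincidence cross section quoted in the log.
    """
    return zdc_rate_hz / (sigma_mb * MB_TO_CM2)
```

Note that lowering sigma from 0.264 mb to 0.23 mb raises the luminosity inferred from the same ZDC rate by about 15%.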

Friday, June 28, 2024

Status & Issues

• EEMC sectors 2&3 (maintenance day…)

• Follow-up on DSM board (Hank, may need an access)

• L2/L0 problems between fills (Akio/Hank, power-cycled VME-72 & VME-100)

• Configuration errors in ESMD (emc-check and several runs after); sys-reset after call from Will

• BBQ, EQ2, EQ3 failed during pedestal run(s); success on 4th attempt

• sTGC ROB #3 power cycle

• Masked RDO iS01:2 after it couldn’t be recovered

• TPX RDO S11:6 power-cycled manually

• One call to Jeff when TPX & iTPC went 100% dead repeatedly; power-cycled all FEEs

• Quality of laser events is often low; Alexei is following up with the DO

Schedule & Plans

• Physics collisions until Tuesday morning

• Blue beam background studies (Ting looked at vertex distributions for abort gaps)

• Ask Gene to have FastOffline without vertex cuts for a few runs from fill 34714

• Modifications to TAC start to reduce early hits from background events (Hank)

• Modified trigger configuration for early runs has been prepared (bgRemedyTest_2024); will try to run the test at the next fill (needs TOF HV up, can run while others are still ramping)

• Updates to power-dip recovery work sheet (input from some subsystems still needed)

 

Thursday, June 27

Status & Issues

• EPD trigger test runs done (25178033, 25178040)

• 7-9 minutes from Physics On to data taking

• Cosmic data during Linac RF recovery in the afternoon

• Severe thunderstorm warning in the evening; the thunderstorm eventually came through at 2:30 am, followed by a power dip with a magnet trip

• MCW had a blown fuse; all VME crates were turned off (CAS watch & Prashanth, fixed at 7 am)

• tpcAirHygroF alarm; Prashanth reset the TPC air blower

• EEMC sectors 2&3 still tripping (looking for an access opportunity)

• LeCroy communication lost (DO -> David); Akio reset it remotely

• DSM board still causing trouble (Oleg -> Hank)

Schedule & Plans

• No APEX today

• Physics collisions: pp200_production_radial

• FastOffline data for abort gap studies of beam background (Ting?); trigger proposal postponed (Carl & Akio)

• Recovery procedure from power dip -> update detector check list

• Next maintenance on Tuesday, July 2

Wednesday, June 26

Day of “Reflection on Safety” (Prashanth)

Status & Issues

• Eleanor requested changes to trigger registers in epdTest-emc-check and epdTest-radial (now using default values)

• Cogging adjustment: TPC vertex is -4 cm (BBC now at -10 cm ???)

• ETOF failed twice in the first few hours of the fill; not included afterwards, as per updated instructions

• Glitch with the BSMD HV GUI before a pedestal run; restarted the GUI (instructions updated)

• New BERT feature: “Prepare / Ready for Pol. Meas.”

• EEMC sectors 2&3 trip every few hours (Tim)

• TPX RDO 11:6 has to be power-cycled manually about once per shift

• EVB11 is dead, taken out (3 am, Jeff)

• FST running 99% dead during laser run (stop and restart)

• Laser event counts are low despite a bright spot on the camera (Alexei)

Schedule & Plans

• Physics: pp200_production_radial

• Request for trigger test: epdTest-emc-check and epdTest-radial at the end of the current fill (5 minutes each)

• Slow increase in bunch intensity (now 1.5e11); yellow polarization lifetime

• APEX still tentatively on Thursday (decision tonight; possibility for access in the morning at 8 am, EEMC: Tim/Oleg)

• Next maintenance on Tuesday, July 2

 

From 06/05/2024 to 06/24/2024, Period Coordinator: Jaroslav Adam, notes:

06/25/24

I. RHIC schedule

  a) Machine development today 10am till 1pm, then collisions also on Wednesday

  b) APEX on Thursday Jun 27

  c) Collisions on Friday Jun 28 and over the Weekend

  d) Next maintenance on Tuesday July 2

II. STAR status and issues

  a) Hot tower in BEMC eta-phi plot, trigger rates are normal, ignore it

  b) Intermittent alarms from sTGC ROB#10, current fluctuating at threshold,
     threshold to be moved

  c) Gas alarms (intermittent) on boxes on window to DAQ room to be reported as log entry

  d) TOF LV yellow alarms to be only reported as log entry (email by Geary
     "Log entry at 14:55 yesterday")

  e) A z-vertex shift is seen for TPC L4, not for BBC (2 cm off), but a similar shift
     is seen at sPHENIX; fast offline to be checked

  f) Possibility for 3 hour access during APEX on Thursday Jun 27, during work hours
     if it happens, BEMC is ready

  g) No collision data yesterday, VPD west TAC looks same as for collisions,
     result of blue beam background

  h) Safety program tomorrow, 5 mins for safety during 10am meeting

  i) Next period coordinator is Oleg Eyser

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/24/24

I. RHIC schedule

  a) Store with 0 mrad crossing angle for sPHENIX at noon for 3+ hours,
     no collisions for STAR, consider this store as beam development

  b) Physics today again at 4+pm

II. STAR status and issues

  a) Smooth running yesterday

  b) z-vertex is shifted by about -11 cm, seen only for TPC vertex finder (space charge),
     +/-5 cm shift is ok, position to be checked with VPD

  c) sPHENIX will be asking for 4 hours without beam (TPC distortions) soon when RHIC is off
     for some other reason, opportunity for BTOW crate 0x10 and EEMC HVSys A controller

  d) Question on including eTOF later in the fill; crews observed more BUSY problems
     at the beginning of the fill; suggestion to try re-including eTOF twice in each store

III. Plans

  a) No data to be taken for store at noon today, the store is aimed for 0 mrad at sPHENIX

  b) Radially polarized data taking at high luminosity, pp200_production_radial, for store at 4+pm

06/23/24

I. RHIC schedule

  a) Physics today, adjustments for yellow polarization

  b) Test for zero crossing angle at sPHENIX tomorrow Jun 24

  c) Machine development on Tuesday Jun 25

II. STAR status and issues

  a) Smooth running yesterday

  b) Pentane bottle changed for sTGC, DOs rebooted EEMC controls

  c) Shifts in vertex z position are being corrected from RHIC side

  d) eTOF BUSY, procedure in eTOF manual from May 27 (in production run,
     no need to stop the run, take it out for the next run)

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/22/24

I. RHIC schedule

  a) Physics over the weekend

II. STAR status and issues

  a) Zero field, low luminosity store took place yesterday 10pm till 3:30 am

  b) VME crates were off from 1pm to 4pm, potential issue with MCW, reached 79F,
     several issues when turning on (BCW turned on after several tries, multimeters
     for field cage had to be power cycled during 5min access)

  c) Inform David before turning off VMEs due to temperature

  d) EEMC HV was restored with help of Will Jacobs and DOs configuring part
     of it manually

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/21/24

I. RHIC schedule

  a) Machine development today 11am till 3pm, physics after

  b) Physics over the weekend

II. STAR status and issues

  a) Opportunity for low luminosity, 56 bunches, zero field store after the development
     at 3+pm, 30kHz BBC rate was requested for the store, call Akio when we get
     the store; the store will be polarized

  b) BTOW crate 0x10 is still masked and disconnected, Tim dealing with one board
     from that crate in lab, then an access for several hours with magnet off
     would be needed

  c) EEMC problematic HVSYs A controller was replaced by a spare (Tim), spare
     did not work, original controller is in place now

  d) EPD crate 4 early hits, two new configurations (Eleanor+Jeff) to be tested
     with timing setup, to be run after emc_check during normal polarization run,
     email by Hank with details to be sent

  e) Lecroy1445 for BBC/VPD/ZDC, procedure to restore communication for DOs now works

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial,
     after the potential low luminosity zero field store

06/20/24

I. RHIC schedule

  a) Maintenance now 8am till 6pm, physics after

  b) Friday Jun 21 machine development 9am till 1pm, then physics also over weekend

II. STAR status and issues

  a) Maintenance now, restricted access, work on west trim magnet (had multiple
     trips past days), TPC RDOs (Tonko), BTOW (Oleg + Prashanth + Tim)

  b) For NMR readout, wait for the magnet ramp to finish before reporting to the shift log
     (the green 'NMR LOCK' indicator to the left of the field value should be lit for the
     field reading to be valid); hold readouts till 6pm for now

  c) Magnet ramp procedure to be updated to instruct MCR to wait with ramping the magnet
     back until STAR informs them it is ready

  d) Visit to STAR today afternoon

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/19/24

I. RHIC schedule

  a) Physics today, maintenance tomorrow, Thursday Jun 20

  b) Friday Jun 21 machine development 9am till 1pm, then physics also over weekend

II. STAR status and issues

  a) Magnet trip on all magnets after a power dip yesterday 6pm, CAS watch replaced
     regulator card for west trim (current was ~10A lower than set value)

  b) BCW crate #76 turned on only after several tries (was turning off itself after
     several seconds), Jeff tested trigger, ok now

  c) Lecroy1445 for BBC/VPD/ZDC lost communication, DOs could not recover because procedure
     involves ssh login to one of SC machines on platform which did not work - now crews
     should call David or me when it happens

  d) +/-5V oscillation on power line, CAD investigating its cause

  e) Current state of online QA plots to be checked by crews at shift change - TPC occupancy
     may change over time depending on RDO availability, similar holds for BTOW

  f) Maintenance tomorrow, Jun 20, work on west trim magnet (had multiple trips past days),
     TPC RDOs (Tonko), BTOW (Oleg + Prashanth + Tim)

  g) Visit to STAR tomorrow afternoon

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/18/24

I. RHIC schedule

  a) Machine development today 11am till 2pm, then physics

  b) Physics Wednesday, maintenance on Thursday Jun 20

  c) Friday Jun 21 machine development 9am till 1pm, then physics also over weekend

II. STAR status and issues

  a) Access yesterday for BTOW radstoneBoards and DSM1 board in BCW crate finished ok
     (DSM1 board was replaced in BCW crate and controller for BTOW crate #80 was replaced
     - radstone were ok)

  b) Maintenance on Thursday Jun 20, work on west trim magnet (had multiple trips past
     days), TPC RDOs (Tonko), BTOW (Oleg + Prashanth + Tim)

  c) Alignment data with field off when machine is in stable condition, by end of June

  d) FCS ECAL voltage file changed by Akio to compensate for radiation damage

  e) Online plots seen to fill slowly in the morning, Jeff working on automatic
     restarts for Jevp plots

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/17/24

I. RHIC schedule

  a) Physics today, intensity increase 0.1e11/store

  b) Next maintenance Thursday Jun 20 (not Wednesday because of holidays)

II. STAR status and issues

  a) BTOW/BSMD out of the runs, radstoneBoards in crate #80 can't initialize,
     access needed, end of current fill at 2:30

  b) JP2 triggers firing hot, taken out, access needed for BW003 DSM board
     (stuck bit)

  c) Jevp plots crashed two times, recovered by Jeff and Wayne, new instruction
     for shift crews to be provided

  d) Multiple magnet trips for west trim, instruction for shift crews to first
     put detectors to magnet ramp and then call CAS watch (they're very quick
     in ramping the trim back), item for maintenance on Thursday from CAS side;
     update instruction to call Prashanth in case of magnet trip

  e) Alignment data with field off, tbd at coordination meeting Tuesday

  f) NMR field inconsistent with readings on magnet current - variations in read
     current values

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/16/24

I. RHIC schedule

  a) PS issues at RHIC, attempt for polarized beams ended in unexpected abort
     at flattop at 4am

  b) Next maintenance Thursday Jun 20 (not Wednesday because of holidays)

II. STAR status and issues

  a) Magnet trips in west trim, 5 times

  b) Jevp plots and run control window crashed, recovered by Jeff,
     log at 17:52 yesterday

  c) Cosmics since 4am

  d) Alignment data with field off, tbd at coordination meeting Tuesday

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial,
     after cold snake and PS issues are recovered

06/15/24

I. RHIC schedule

  a) Unpolarized stores, polarization after cold snake is recovered,
     expected later today

  b) Physics over the weekend

  c) Next maintenance Thursday Jun 20 (not Wednesday because of holidays)

II. STAR status and issues

  a) Multiple trips after unexpected beam abort around 1:30am today (MTD, BSMD,
     sTGC, EEMC, TPC), updating Detector Readiness Checklist to wait 5 minutes
     after 'physics' is declared for a store to start bringing detectors to physics,
     also no Flat Top state in Detector States

  b) BERT screen on SL desk not allowing to select STAR status in pull-down menu,
     still remains

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial,
     after cold snake is recovered

06/14/24

I. RHIC schedule

  a) Unpolarized stores today, cold snake to be recovered by 8pm, polarization
     after

  b) Physics over the weekend

  c) Next maintenance Thursday Jun 20 (not Wednesday because of holidays)

II. STAR status and issues

  a) Access yesterday for DSM1 boards (noisy JP triggers) and MTD HV was ok,
     issues seem fixed

  b) New protected password, please login to drupal link and scroll to the bottom
     of the page

  c) BERT screen on SL desk not allowing to select STAR status in pull-down menu,
     also beam dump window does not appear

  d) Shift crews, please pay attention to AC water drain, was full now, and keep
     doors closed when the AC is running, DAQ room doors also to be closed
     at all times

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial,
     after cold snake is recovered

06/13/24

I. RHIC schedule

  a) Access now at 10am for two hours, then machine development at noon till 4pm

  b) Polarized physics at 4pm (cold snake will be recovered early afternoon)

  c) Next maintenance Thursday Jun 20 (not Wednesday because of holidays)

II. STAR status and issues

  a) Noisy JP triggers, access now to replace 2 possible DSM1 boards

  b) MTD, power failure in CAEN PS crate, same access now to replace power module

  c) Trigger thresholds for B/EMC are changed (to account for lower gain in PMTs),
     email on star-ops, subject 'Changes to B/EMC threshold settings'

  d) Configuration 'pp200_production_radial' to be used for physics at 4pm again

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/12/24

I. RHIC schedule

  a) APEX today starting 10am, polarization measurement today at 9pm (when cold
     snake is restored), back to physics at midnight

  b) Thursday 6/13 till Sunday 6/16: physics

  c) Next maintenance Thursday Jun 20 (not Wednesday because of holidays)

II. STAR status and issues

  a) Noisy JP triggers, 2+ hour access to replace 2 possible DSM1 boards, might
     get such access tomorrow Thursday, after machine development ~2pm - to be updated

  b) MTD, power failure in CAEN PS crate, power module to be replaced, 1 hour access

  c) 7bit bunch Id, incorrect reset for counter, has not happened since Monday morning

  d) Trigger thresholds for barrel, test run 25163054 done last night (to compensate
     for lower gains in PMTs), tba over star-ops by Carl

  e) Online plots crashing from time to time, Jeff investigating

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/11/24

I. RHIC schedule

  a) Physics today, last store will start at 10pm to last till APEX tomorrow at 10am,
     then polarization measurement tomorrow at 9pm (when cold snake is restored)

  b) Thursday 6/13 till Sunday 6/16: physics

II. STAR status and issues

  a) EPD missing sectors were caused by eq3_qtd and eq4_qtd nodes masked in run control,
     no clear reason why, eq4 lost first, eq3 in run after

  b) Noisy JP triggers, BC102, DSM#1, tbd at trigger meeting

  c) 7bit bunch Id, incorrect reset for counter (Akio), tbd at trigger meeting

  d) Thresholds for barrel triggers to be readjusted to compensate for aging effects,
     Carl will instruct SL on zoom for control room

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/10/24

I. RHIC schedule

  a) Physics today and tomorrow, APEX on Wednesday, Jun 12

II. STAR status and issues

  a) EPD has missing sectors, EQ3 and EQ4 not reading out, potential access
     at noon or after (sPHENIX asked for 2 hours)

  b) Noisy JP triggers, BC102, DSM#1, Hank looking into it

  c) 7bit bunch Id, incorrect reset for counter (Akio)

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/09/24

I. RHIC schedule

a) Physics now till Tuesday, Jun 11

II. STAR status and issues

  a) eTOF not in the runs, repeated 'scDeamon.C:#1904 ETOF has 1018>1000 EVB' message

  b) GUI for VME 70 (EEMC canbus) shows incorrect voltages, crate itself works ok

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/08/24

I. RHIC schedule

  a) Physics over the Weekend + Monday and Tuesday,
     Jun 8 till 11

II. STAR status and issues

  a) Recurrent trips for west trim magnet,
     CAS worked on it yesterday

  a) BSMD sector 2 and TPX[34] gave errors in pedestal run 25160019;
     due to oncoming injection, crews couldn't rerun the pedestal

  c) Transient TOF or MTD LV alarms can be ignored (not temperature),
     log entry for persistent alarms, in email to star-ops by Geary
     yesterday, subject 'TOF LV yellow alarms'

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/07/24

I. RHIC schedule

  a) today: store at 3am, machine development now till 11am (spin tune study for blue snake),
     then physics

  b) Weekend Jun 8,9: physics

II. STAR status and issues

  a) Wrong production configuration (pp200_production_High_Luminosity) was in Detector Readiness Checklist,
     (typo introduced yesterday when the checklist was updated), correct configuration is pp200_production_radial

  b) BSMD is included in production runs

  c) Shift crews should subscribe to star-ops mailing list, star-ops-l@lists.bnl.gov

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/06/24

I. RHIC schedule

  a) today: APEX was scheduled till 9pm, however no beam from AGS (Siemens exciter power supply),
     some APEX sessions will be rescheduled, back to physics at 9pm

  b) Friday Jun 7: spin tune study for blue snake (~ 2 hours) between stores

  c) Weekend Jun 8,9: collisions

II. STAR status and issues

  a) Maintenance completed yesterday

  b) Cosmics overnight because of no beam

  c) BSMD to be included, Oleg will give instruction

  d) For crews: very humid these days, please keep control room and DAQ room doors closed
     so the AC works properly. PS: also empty the coffee water tray from time to time,
     otherwise it spills over the table

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

06/05/24

I. RHIC schedule

  a) today: maintenance 8am till 4pm, then collisions

  b) Thursday Jun 6: apex from 8am till 9pm, investigation for longitudinal
     component, STAR will take ZDC polarimetry runs, then collisions by 10pm

  c) Friday Jun 7: spin tune study for blue snake (~ 2 hours) between stores

  d) Weekend Jun 8,9: collisions

II. STAR status and issues

  a) Smooth running

  b) Maintenance day today:

  c) Magnet to be turned off after the morning beam dump for work on the 200T chiller
     (used for magnet turn-on), then magnet turned back on to test the chiller

  d) Turn off TPC FEEs + VMEs + TOF HV, LV + MTD LV HV due to work on condenser fan
     for the 80T chiller (cools MCW)

  e) EEMC MAPMT FEE box cooling (Bill S. and Prashanth), when magnet is off,
     barriers down and access to the (south) poletip

  f) TPX/iTPC RDOs masked out (3 of them), Tonko will work on it when FEEs are back on

  g) eTOF: HV GUI colors show only sector 3 at full, all others zero; re-opening the GUI
     may clear stale colors

  h) Crews should look up reference plots; SLs are passing information to those who asked

  i) EPD: lower gain in 3 (outer) tiles (Maria Stefaniak)

  j) BSMD to be included tomorrow

III. Plans

  a) Radially polarized data taking at high luminosity, pp200_production_radial

From 05/15/2024 to 06/04/2024, Period Coordinator: Zilong Chang, notes

STAR daily operation meeting 05/14/2024

 (Period Coordinator: Zhangbu Xu)

Incoming Period Coordinator: Zilong Chang

 

RHIC Schedule

Plan for this week,

  • Maintenance day (Wednesday), APEX (Thursday), switch to Spin run  
  • Emittance is large, achieving 2.1x10^11 per bunch at injection and 1.9x10^11 at top (ultimate goal is 3x10^11); 
  • Yesterday: 28x28 and vernier scan for STAR Monday 6 hours (3-4PM); high-luminosity run 2 hours beforehand; 
  • 56x56 sPHENIX not yet ready
  • STAR leveling at 20KHz

 

STAR status

  • Physics running at 7KHz. pp200-production-LowLuminosity
    MB-EPD+forward: 1.4B evts, prescale (7-10); MB-EPD-TOF4: 1.55B evts, prescale 1
    switch to leveling at 40KHz Monday and MB-EPD+forward all the way
  • Check prescale=1 for MB if we can have even higher DAQ rate (7—7.4KHz)
  • sTGC: 
    Wednesday access to change one of the bad FEEs;
    Zero-field Alignment dataset (56x56, third priority Monday)? 
  • Shift operation. Any issues?
  • Shift vacancy issue? Work in progress (Cebra)
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs
  • High-Lumi configuration: 
    Configuration run taken (BBCA 500KHz); 
    zdcpolarimetry (front-end thresholds at run-15 settings), analysis shows consistency with previous runs; new gain file produced;
  • 28x28 fill 8 hours

Vernier Scans at beginning and end; forward cross section data; Smooth runs; 
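The "prescale (7-10)" bookkeeping in the status above means the DAQ records only 1 of every N fires of that trigger, keeping the recorded rate manageable; the factor N is restored offline by weighting each kept event by N. A sketch of a simple counting prescale (illustrative, not STAR's actual trigger code):

```python
def apply_prescale(fires, n):
    """Return indices of trigger fires kept by a counting prescale of n."""
    kept, count = [], 0
    for i, fired in enumerate(fires):
        if fired:
            count += 1
            if count % n == 0:  # keep every n-th fire
                kept.append(i)
    return kept

# With prescale 7, a steady trigger keeps exactly 1/7 of its fires.
```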

 

Plans

  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL); May 16, APEX;


STAR daily operation meeting 05/13/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Emittance is large, doing shorter fills (2 hours) to effectively scrub;  
    achieving 2.1x10^11 per bunch at injection and 1.9x10^11 at top (ultimate goal is 3x10^11); 
  • 28x28 and vernier scan for STAR Monday 6 hours (3-4PM); high-luminosity run 2 hours beforehand; 56x56 sPHENIX not yet ready
  • Two abort gaps blue and yellow background asymmetry 
  • STAR leveling at 40KHz

 

STAR status

  • Physics running at 6.2KHz. pp200-production-LowLuminosity
    MB-EPD+forward: 1.2B evts, prescale (7-10); MB-EPD-TOF4: 1.55B evts, prescale 1
    switch to leveling at 20KHz Monday and MB-EPD+forward all the way?
  • L4 died a couple of times, is this indicative of a hardware issue?
  • sTGC: 
    1mV/HV 2900V/Bit ON, updated at 11:46AM yesterday after beam dump; 
    Wednesday access to change one of the bad FEEs;
    Zero-field Alignment dataset (56x56, third priority Monday)? 
  • Shift operation. Any issues?
    shift leader desktop freezes from time to time (for 5 minutes), Firefox issue (Wayne)? 
    Should we stand down during thunderstorms? Request information from MCR; APEX mode; 
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (starting to work); wiring done, work permit? 
  • Shift vacancy issue? Work in progress (Cebra)
  • Scaler timing for Polarization monitoring. DONE;
    3 scaler boards left, #3 ZDC, #5,6 BBC; no backup
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 
  • High-Lumi configuration: 
    When we are done, inform CAD. They will switch to 28x28;
    BHT2 remove BBC TAC requirement; dimuon VPD TAC2 to VPD TAC (100cm); 
    UPC-J/Psi BHT bit bug fixed; 
    Jeff will take a couple of runs so that Carl and others can check the settings; 

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; 
    Monday instead, removing cross angle and leveling at 20KHz, with high luminosity configuration file; 
    scaler working; need to check ZDC polarimetry data and thresholds; 
    ZDCSMD gain matching, >30 minutes zdcpolarimetry (high priority on Monday, 1 million evts beginning and end); 
    afternoon-early evening Monday 28x28 bunches; 
    vernier scan configuration file does have scaler readout every second and in the run control (second priority Monday); 6 hours
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL); May 16, APEX;
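The vernier scan in the plans above works by stepping one beam transversely across the other and recording the coincidence rate versus separation: for Gaussian beams the rate falls off as a Gaussian whose width is the quadrature sum of the two beam sizes, and fitting it calibrates the absolute luminosity. A hedged sketch of the rate model (names and units are illustrative, not STAR's analysis code):

```python
import math

def vernier_rate(d_mm, r0_hz, sigma_blue_mm, sigma_yellow_mm):
    """Coincidence rate at transverse beam separation d_mm for Gaussian beams."""
    cap_sigma_sq = sigma_blue_mm**2 + sigma_yellow_mm**2  # quadrature sum of beam sizes
    return r0_hz * math.exp(-d_mm**2 / (2.0 * cap_sigma_sq))

# Rate is maximal at zero separation and falls off as the beams are stepped apart.
```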
     

 

STAR daily operation meeting 05/12/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Emittance is large, doing shorter fills (2 hours) to effectively scrub;  
    achieving 1.5x10^11 per bunch (ultimate goal is 3x10^11); 
  • Beam since last evening; 2 hour store, high background, not the best for MB-EPD
  • Continue short store over weekend, 28x28 and vernier scan for STAR likely Monday
  • STAR leveling at 40KHz
  • No 9AM meeting on the weekend; 

 

STAR status

  • Physics running at 5.8KHz. pp200-production-LowLuminosity
    MB-EPD+forward: 1.1B evts, prescale (7-10); MB-EPD-TOF4: 1.3B evts, prescale 1
    switch to leveling at 20KHz Monday and MB-EPD+forward all the way?
  • Need to ensure efficient running: start data-taking right after PHYSICS ON; 
    do not wait for leveling. 
    After luminosity leveling is stable at BBCAnd ~= 40KHz, stop and restart the run. 
  • sTGC: 
    gain 3mV/HV 2850V/Bit ON, mask two FEEs; 
    1mV/HV 2900V/Bit ON, NO on run control? Wednesday access to change one of the bad FEEs; 11:46AM beam dump; 
    Zero-field Alignment dataset Monday (56x56, third priority Monday)? 
  • Shift operation. Any issues?
    shift leader desktop freezes from time to time (for 5 minutes), Firefox issue (Wayne)? 
    Should we stand down during thunderstorms? Request information from MCR; APEX mode; 
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (starting to work?); wiring done, work permit? 
  • Shift vacancy issue? Work in progress (Cebra)
  • Scaler timing for Polarization monitoring. DONE;
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 
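The "stop and restart after leveling is stable" guidance in the list above amounts to a stability check on recent BBCAnd scaler samples before cutting the run. A minimal sketch, where the function name, tolerance, and window are our illustrative assumptions rather than the real run-control interface:

```python
def leveling_stable(bbc_and_khz, target=40.0, tol=2.0, window=5):
    """True once the last `window` samples all sit within tol of target (kHz)."""
    if len(bbc_and_khz) < window:
        return False
    return all(abs(x - target) <= tol for x in bbc_and_khz[-window:])

# Once this returns True, the crew would stop and restart the run so the
# whole run shares one beam condition.
```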

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; 
    Monday instead, removing cross angle and leveling, with high luminosity configuration file; 
    scaler working; need to check ZDC polarimetry data and thresholds; 
    ZDCSMD gain matching, >30 minutes zdcpolarimetry (high priority on Monday); 
    afternoon-early evening Monday 28x28 bunches; 
    vernier scan configuration file needs to have scaler readout every second (second priority Monday); 
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL); May 16, APEX;

 

STAR daily operation meeting 05/11/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Emittance is large, doing shorter fills (2 hours) to effectively scrub;  
    achieving 1.5x10^11 per bunch (ultimate goal is 3x10^11); 
  • Beam since last evening; 2 hour store, high background, not the best for MB-EPD
  • Continue short store over weekend, 28x28 and vernier scan for STAR likely Monday
  • STAR leveling at 40KHz
  • No 9AM meeting on the weekend; 

 

STAR status

  • Physics running at 5.8KHz. pp200-production-LowLuminosity
    MB-EPD+forward: 1.05B evts, prescale (7-10); MB-EPD-TOF4: 1.1B evts, prescale 1
  • Potential rate 8KHz, TPC hits the speed limit at 6KHz; 
    try TPC readout at 6KHz?
  • Overnight L0 power cycle; L4 down, Wayne/Diyu/Mike fixed it this morning;
  • Need to make sure efficient running; data-taking right after PHYSICS ON; 
    do not wait for leveling; 
    If DAQ rate <5KHz after luminosity leveling or BBCAnd~=40KHz, stop and restart run. 
  • clean up coffee station (dump trays);
  • VPD: DONE; VPD TAC2 windows; HT-VPD100; 
  • MTD timing window cuts; 
  • sTGC: 
    gain 3mV/HV 2850V/Bit ON, mask two FEEs; 

Zero-field Alignment dataset later on 

  • Shift operation. Any issues?
    cosmic ray without TPC yesterday;  
    Should we stand down during thunderstorms? Request information from MCR; APEX mode; 
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (starting to work?); wiring done, work permit? 
  • Shift vacancy issue? Work in progress (Cebra)
  • Scaler timing for Polarization monitoring. DONE;
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; 
    Monday instead, removing cross angle and leveling, with high luminosity configuration file; scaler working; need to check ZDC polarimetry data and thresholds; ZDCSMD gain matching; afternoon-early evening Monday 28x28 bunches;  
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL) 




STAR daily operation meeting 05/10/2024

  

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Emittance is large, doing shorter fills (2 hours) to effectively scrub;  
    achieving 1.5x10^11 per bunch (ultimate goal is 3x10^11); 
  • Beam since last evening; 2 hour store, high background, not the best for MB-EPD
  • Continue short store over weekend, 28x28 and vernier scan for STAR likely Monday
  • STAR leveling at 40KHz
  • No 9AM meeting on the weekend;
  • CAD PS access, no STAR access after this fill for 2 hours; after that, longer store 6 hours for sPHENIX TPC conditioning; after that, short fills again

 

STAR status

  • Physics running at 5.8KHz. pp200-production-LowLuminosity
    MB-EPD+forward: 980M evts, prescale (7-10); MB-EPD-TOF4: 950M evts, prescale 1
  • Potential rate 8KHz, TPC hits the speed limit at 6KHz; 
    try TPC readout at 6KHz?
  • VPD: production ID; active splitter checks, no issue discovered but need to find the source of issue (Tim); VPD west ADC (6 MXQ even chs) progressing; 
    raise thresholds to 80 (one channel masked out), east 50; noise persists with HV OFF; commissioning on tune configuration file;
  • sTGC: 
    gain 3mV/HV 2850V/Bit ON, mask two FEEs? Later today

Zero-field Alignment dataset later on 

  • Shift operation. Any issues? 
    Should we stand down during thunderstorms? Request information from MCR; APEX mode; 
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (starting to work?); wiring done, work permit? 
  • Shift vacancy issue? Work in progress (Cebra)
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 
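The recurring "prepost=-1" in the BBC QT item above refers to how trigger detector data are indexed relative to the triggered bunch crossing: prepost = 0 is the triggered crossing, -1 the crossing before it, so the issue is data landing one crossing early on some boards. A minimal sketch of the indexing, assuming RHIC's nominal 120-crossing revolution (names are ours, not STAR's trigger code):

```python
def crossing_for_prepost(triggered, prepost, n_crossings=120):
    """Absolute crossing number sampled at a given prepost offset."""
    return (triggered + prepost) % n_crossings

# Data intended for crossing 0 but read at prepost = -1 actually come
# from crossing 119 of the previous revolution.
```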

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; 
    Monday instead, removing cross angle and leveling, with high luminosity configuration file; scaler working; need to check ZDC polarimetry data and thresholds; 
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL) 

STAR daily operation meeting 05/09/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Emittance is large, doing shorter fills (2 hours) to effectively scrub
    during the day and overnight long fills; 
  • Blue injection kicker (arcing and oil blockage), AGS field unstable (Siemens PS)
    no beam last 24 hours
  • Thursday Machine Development (until 16:00); 4-8PM scrubbing; 8PM physics
  • STAR leveling at 40KHz
  • Possible 58x58 (28x28) fill Monday             

STAR status

  • Physics running at 5.8KHz. pp200-production-LowLuminosity
    MB-EPD+forward: 910M evts, prescale (7-10); MB-EPD-TOF4: 850M evts, prescale 1
  • VPD: production ID; active splitter checks, no issue discovered but need to find the source of issue (Tim); VPD west ADC (6 MXQ even chs) progressing; 
    raise thresholds to 64 (one channel masked out), east 50; noise persists with HV OFF; commissioning on tune configuration file;
  • sTGC: 
    gain 3mV/HV 2850V/Bit ON, mask two FEEs? Later today

Zero-field Alignment dataset later on 

  • Shift operation. Any issues? 
    Should we stand down during thunderstorms? Request information from MCR; APEX mode; 
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (does not work well); wiring done, work permit? 
  • Shift vacancy issue? Work in progress (Cebra)
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 
  • Remove VPDA from cosmic ray trigger configuration file; 

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; Monday instead, removing cross angle and leveling, with high luminosity configuration file; scaler working; need to check ZDC polarimetry data and thresholds; 
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL) 
    Tim is not available for that day?

STAR daily operation meeting 05/08/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Emittance is large, beam scrubbing for 4 hours; 
    limited success, maybe doing shorter fills (2 hours) to effectively scrub
    during the day and overnight long fills; 
  • Blue injection kicker (arcing and oil blockage) no beam last 24 hours
  • APEX Wednesday 8AM-11PM using yellow beam for now; Thursday Machine Development (10AM-noon)
  • STAR leveling at 40KHz            

STAR status

  • Physics running at 5.8KHz. pp200-production-LowLuminosity
    MB-EPD: 800M evts, prescale (7-10); MB-EPD-TOF4: 850M evts, prescale 1
  • VPD: production ID; active splitter checks, no issue discovered but need to find the source of issue (Tim); VPD west ADC (6 MXQ even chs) progressing; 
    raise thresholds; noise persists with HV OFF; commissioning on tune configuration file;
    DSM VPD to MTD (MT101) =>QT board in the wrong slot, FIXED; Tim/Chris
  • sTGC: 
    gain 3mV/HV 2850V/Bit ON, mask two FEEs? Later today
    increase forward components in MB-EPD and EPD-TOF4

MB-EPD-forward included MB-EPD evts? 
Zero-field Alignment dataset later on 

  • Shift operation. Any issues? 
    Cosmic run #17 this morning, 40-60 auto-configuring during thunderstorm; not in elog;

Should we stand down during thunderstorms? Request information from MCR; APEX mode; 
AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (does not work well); wiring done, work permit? 

  • Shift vacancy issue? Work in progress (Cebra)
    14 worked on, 21 filled,  
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 
  • ePIC HCAL HV settings and threshold (restricted access);

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; (Thursday/Friday 1-2 hour beam time) Monday instead, removing cross angle and leveling, with high luminosity configuration file; scaler working; need to check ZDC polarimetry data and thresholds; 
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL) 
    Tim is not available for that day?


STAR daily operation meeting 05/07/2024

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
    1.2x10^11 per bunch from about 0.9x10^11 earlier
  • Emittance is large, beam scrubbing for 4 hours; 
    limited success, maybe doing shorter fills (2-4 hours) to effectively scrub
    during the day and overnight long fills;  
  • 6-hour access requested by sPHENIX/CeC/CEPS
  • APEX Wednesday 8AM-11PM; Thursday Machine Development (10AM-noon)
  • STAR leveling at 40KHz            

STAR status

  • Physics running at 5.8KHz. pp200-production-LowLuminosity
    MB-EPD: 800M evts, prescale (7-10); MB-EPD-TOF4: 850M evts, prescale 1
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. 
  • VPD: production ID; active splitter checks today (Tim); 
    VPD west ADC (6 MXQ even chs) progressing; if noise persists, raise thresholds; 
    should be done today/tomorrow; commissioning on tune configuration file;
    DSM VPD to MTD (MT101) wrong slots=>QT board in the wrong slot; Tim/Chris
  • sTGC: 
    Access to gain change Monday also change prod ID? 
    HV scan, another iteration yesterday and today; (Tonko/Jeff/Daniel) 
    24 hours afterward final change. 
    gain 3mV/HV 2850V/Bit ON; increase forward components in MB-EPD and EPD-TOF4

MB-EPD-forward included MB-EPD evts? Zero-field Alignment dataset later on 

  • Shift operation. Any issues?
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit (does not work well); wiring done, should be finished? 
  • Shift vacancy issue? Work in progress (Cebra)
    14 worked on, 21 filled,  
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • BBC QT in wrong bunch (prepost=-1) in some runs with some boards; production runs; 
  • FST reference plots updated;
  • ePIC HCAL HV settings and threshold (restricted access);

Plans

  • Spin Trigger and polarimetry commissioning
    production trigger configuration, optimize trigger rates 
    polarimetry scalers; Thursday/Friday 1-2 hour beam time, removing cross angle and leveling, with high luminosity configuration file; scaler working; need to check ZDC polarimetry data and thresholds; 
  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL) 
    Tim is not available for that day?

STAR daily operation meeting 05/06/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
    1.2x10^11 per bunch from about 0.9x10^11 earlier
  • Emittance is large, beam scrubbing for 4 hours  
  • Potential 2-hour access requested by sPHENIX
  • STAR leveling at 40KHz

                          

STAR status

  • Physics running at 5.6KHz. pp200-production-LowLuminosity
    MB-EPD: 700M evts, prescale (5-7); MB-EPD-TOF4: 700M evts, prescale 1.1—1.5
    possible options to be discussed at triggerboard meeting: leveling at 30KHz, change prescale, short run at 10KHz? BHT3 and BHT3-L2gamma triggers
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. 
  • VPD: tac offset correct (ONLINE), slewing upload and DONE (BBQ); production ID;
    VPD west ADC (5 MXQ chs) progressing; power-cycle the active WEST splitters
  • eTOF included in the run (Geary) 
  • sTGC: 
    Access to gain change Monday also change prod ID? 
    HV scan, another iteration yesterday and today; (Tonko/Jeff/Daniel) 
    24 hours afterward final change. 
    Zero-field Alignment dataset later on 
  • Shift operation. Any issues?
    elog: Water on the floor near restroom near STAR control room. Maybe a new roof leak? Dry it. (leak from the roof) 
    MTD/sTGC gas alarms overnight; gas bottles changed; 
    TPC sector 12 RDO 1 masked out permanently
    setRHICclock after using local clock; 

AC unit above the control room roof: permanent unit arrived, currently only the temporary unit, no cooling, being worked on. 
wiring done, waiting for the permit? 

  • Shift vacancy issue? Work in progress (Cebra) 
    5 vacancies in the coming weeks; no issue this week; 
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC

Plans

  • a full Maintenance Day on May 15th (Linda Horton’s visit at BNL) 
    Tim is not available for that day? 
  • Now to May 14th, a short run to commission trigger rate/background and zdcpolarimetry with high luminosity and zero crossing angle? 


STAR daily operation meeting 05/05/2024

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
    1.2x10^11 per bunch from about 0.9x10^11 earlier 
  • STAR leveling at 40KHz

                          

STAR status

  • Physics running at 5.6KHz. pp200-production-LowLuminosity
    MB-EPD: 510M evts, prescale (5-7); MB-EPD-TOF4: 500M evts, prescale 1.1—1.5
    Offline update on pileup and vertexing (Shengli)
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. removing the “prepare for physics” for BEMC.
  • VPD: tac offset correct (ONLINE), slewing upload today (BBQ); production ID;
    VPD west ADC (5 MXQ chs) progressing; 
  • eTOF included in the run (Geary) 
  • sTGC: 
    Access to gain change Monday also change prod ID? 
    HV scan, another iteration yesterday and today; (Tonko/Jeff/Daniel) 
    24 hours afterward final change. 
    Zero-field Alignment dataset later on 
  • Shift operation. Any issues?
    TPC sector 12 RDO 1 masked out permanently
    run38 last night took 20 minutes; 
    After the beam dump, turn on the VPD and take:
    pedestal_rhicclock_clean with TRG+DAQ+FCS (1k events).  
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit. 
    wiring done, should have been finished yesterday? 
  • Shift vacancy issue? Work in progress (Cebra) 
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC

Plans

  • a full Maintenance Day on May 14th (Linda Horton’s visit at BNL) 
    Tim is not available for that day. 


STAR daily operation meeting 05/04/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • STAR leveling at 40KHz
  • No CNI Polarization measurement in the middle of a store for now

                          

STAR status

  • Physics running at 5.6KHz. pp200-production-LowLuminosity
    MB-EPD: 320M evts; MB-EPD-TOF4: 310M evts
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. 
  • EPD trigger timing – EPD timing scan DONE.  
    EQ gates need to be updated in the tier1 file (DONE)
  • VPD: tac offset correct (ONLINE), slewing upload today (BBQ); production ID;
    VPD west ADC (5 MXQ chs) progressing
  • sTGC: 
    Access to gain change Monday also change prod ID? 
    HV scan last night, another iteration today;
    24 hours afterward final change. 
    Zero-field Alignment dataset later on 
  • Shift operation. Any issues?  
    TPC sector 12 RDO 1 masked out permanently
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit. 
    wiring done, should have been finished yesterday? 
  • Shift vacancy issue? Work in progress (Cebra) 
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • Test run of gating grid at different rates (1 minute each) 
    went well, processing data, likely do another one with high luminosity  

 

Plans

  • a full Maintenance Day on May 14th (Linda Horton’s visit at BNL) 
    Tim is not available for that day. 
  • After Flattop is achieved and the beams have been steered to achieve collisions, take:

         emc-check with  TRG + DAQ + BTOW + ETOW + ESMD + FCS   (50k events)

Once beams reach PHYSICS ON status, turn on detectors according to the Detector States Diagram. When detectors are ready, start running pp200_production_LowLuminosity with (all triggers included):

        TRG+DAQ+TPX+ITPC+BTOW+ETOW+ESMD+TOF+eTOF+MTD+GMT+FST+sTGC+FCS+L4

For now, BSMD is not included in the production data-taking.


STAR daily operation meeting 05/03/2024

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • Access 13:30—14:30 today
  • Machine Development 14:30—17:00
  • STAR leveling at 40KHz; last fill this morning very high background first 1/2hour 
  • No CNI Polarization measurement in the middle of a store for now

                          

STAR status

  • Physics running at 5.6KHz. pp200-production-LowLuminosity
    MB-EPD: 170M evts; MB-EPD-TOF4: 160M evts
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. 
  • EPD trigger timing – EPD timing scan DONE.  
    EQ gates need to be updated in the tier1 file? 
  • ZDCSMD issue, swap out daughter card#1. DONE!  
  • Previous Issues: 
    • low rate with QT crate issue (done). 
      trigger deadtime at high rate but not an issue at the moment 
    • high lumi at 3KHz, need low deadtime, not an issue
  • VPD: tac zero out. Be ready today for data-taking 
    Done? VPD west ADC
  • sTGC: 
    Access to gain change? Daniel? 
    Zero-field Alignment dataset later on 
  • Shift operation. Any issues?  
    AC unit above the control room roof: permanent unit arrived, currently only the temporary unit. 
  • Shift vacancy issue? Work in progress (Cebra) 
  • FST will switch back from 9 time bin to 3 time bin (DONE)

Plans

  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
    3 scaler boards left, #3 ZDC, #5,6 BBC
  • Test run of gating grid at different rates (1 minute each) 11:30AM 
  • a full Maintenance Day on May 14th (Linda Horton’s visit at BNL) 
    Tim is not available for that day. 

·       Once beams reach FLAT TOP, run EMC_check with TRG+DAQ+BTOW+ETOW+ESMD+FCS

·       When MCR issues “prepare for dump”, start bringing detectors to the "Preparing beam dump" state and the SL clicks “Prepare to dump”. 
run zdcpolarimetry_2024 with DAQ+TRG
After all detectors are in the safe mode, the SL clicks “Ready to dump”. After beams are dumped, stop the run. 

STAR daily operation meeting 05/02/2024

 

 

RHIC Schedule

Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • 10Hz Machine development with 12x12
  • Access 10:30—11:30AM today (CEC)
  • STAR leveling at 40KHz

                          

STAR status

  • Physics running at 5.6KHz. pp200-production-LowLuminosity
  • EPD trigger timing – EPD timing scan again. Mike analyzing the data, 
    EQ gates need to be updated in the tier1 file (need to be done)
  • use EPD coincidence replacing BBCA as the main trigger component. EPD has wider gate which can see both signals and background. It works great!
  • ZDCSMD issue, have to swap out daughter card#1, access right now. 
  • Previous Issues: 
    • low rate with QT crate issue (investigation ongoing).
      Chris worked on it, needs elog entries. 
      QT rate seems to be up (4KHz, 40% dead) => 5.6KHz normal deadtime 
  • VPD: tac zero out. Be ready today for data-taking 
    still zeroed out, and this is not good for the production run; 
    will be fixed NEXT FILL. 
  • EPD parameters were left in a different state from earlier in the production configuration last night. 
  • sTGC: time bin cut? Access to gain change? Zero-field Alignment dataset 
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. Prom_Check DAQ+TRG+iTPC once a day (no event, stop run after it starts). 
  • Shift operation. Any issues?  AC unit above the control room roof: permanent unit arrived, currently only the temporary unit. Detector operator (fill in two weeks, visa revoked at airport)

 

Plans

·       Before running the official production pp200_production_LowLuminosity, whenever possible, run zdcpolarimetry_2024 with DAQ+TRG

  • FST will switch back from 9 time bin to 3 time bin (today)
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
  • Test run of gating grid at different rates (1 minute each) 
  • a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)



STAR daily operation meeting 05/01/2024

RHIC Schedule
Plan for this week,

  • 111x111 bunch collisions for experiments remaining of the week (mostly).
  • 10Hz Machine development with 12x12
  • STAR leveling at 40KHz, two short fills this morning

                          

STAR status

  • Physics running started since Yesterday. pp200-production-LowLuminosity
  • EPD trigger timing - one clock late issue resolved. Eleanor requested 4 EPD runs 
    last night. Status? Readout Crossing should be 8 instead of the default 7?
    EQ gates need to be updated in the tier1 file 
  • Once EPD finalized, proposed to use EPD coincidence replacing BBCA as the main trigger component. EPD has wider gate which can see both signals and background. 
  • ZDCSMD issue, Aihong took some pedestal runs and coordinated with Hank. Channel 4 has a high pedestal. 
    ZDCSMD looks good in the fill since last night. There is a single run (25122011) from a short-lived fill 
    this morning in which the same offending channel looks a little suspicious, but it is not crazy. 
    Aihong will continue monitoring it. May have to swap out daughter card #1 
  • Previous Issues: 
    • L0L1 turn-off issue (Tim changed fan tray for L0L1 crate 62)
      fixed, no issue so far
    • low rate with QT crate issue (investigation ongoing).
      Chris worked on it, needs elog entries. 
      QT rate seems to be up (4KHz, 40% dead) => 5KHz normal deadtime 
  • VPD: tac zero out. Be ready today for data-taking 
  • Gene analyzed data from Monday/Tuesday for studying the background effect. 
    see effects and need to keep BBC blue+1.7Yellow background <200KHz 
  • sTGC: time bin cut? Access to gain change? 
  • Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. Prom_Check DAQ+TRG+iTPC once a day (no event, stop run after it starts). 
  • Shift operation. Any issues?  
  • Jeff changed the logic for how forward detectors are included in the trigger.
  • Production configuration is needed. Jeff will clean up the file. Default configuration: pp200_production_LowLuminosity: done
  • Determine the gain for the polarimeter at the beginning of the fill. Jeff has a configuration for this: file is available, will put instructions in the checklist 
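The QT rate/deadtime numbers quoted in the status list are simple livetime arithmetic. A minimal sketch, assuming the percentages mean the usual deadtime fraction (the function name and interpretation are ours, not from the minutes):

```python
# Hypothetical livetime arithmetic (our assumption, not from the minutes):
# a raw trigger rate with a given deadtime fraction yields a lower
# live (recorded) rate.

def live_rate(raw_rate_hz: float, dead_fraction: float) -> float:
    """Live rate = raw rate times the live fraction (1 - deadtime)."""
    if not 0.0 <= dead_fraction < 1.0:
        raise ValueError("dead_fraction must be in [0, 1)")
    return raw_rate_hz * (1.0 - dead_fraction)

# 4 kHz raw at 40% deadtime -> roughly 2.4 kHz actually recorded
print(live_rate(4000.0, 0.40))
```

This is only a back-of-the-envelope helper for reading numbers like "4 kHz, 40% dead" in the shift notes.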

 

Plans

  • FST will switch back from 9 time bin to 3 time bin (today)
  • Readiness checklist update for physics available.
  • Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.
  • Test run of gating grid at different rates (1 minute each) 
  • a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

                                                          STAR daily operation meeting 04/30/2024
                                                      (Period Coordinator change: Kong Tu => Zhangbu Xu)

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      111x111 bunch collisions for experimental setup overnight.

·      Machine development today (10:00 – 14:00).

·      Crossing angle 1 mrad at STAR; leveling at STAR made the signal/background ratio very small. Reverted to running without leveling.

                          

§  STAR status

·      Physics running started around 4am this morning. We created a temporary configuration and promoted MB-BBC, MB-TOFmult4, BBC, ZDC, EPD.

·      Global timing moved back 2ns. CAL SCAN was redone.

·      EPD trigger timing - one clock late issue resolved. Eleanor fixed it! A few minor changes should be done. All detectors calibration done. VPD E and W max tac value changed from 1950 to 2100.

·      ZDCSMD issue: Aihong took some pedestal runs and coordinated with Hank. Channel 4 has a high pedestal. The issue is associated with the QT board. Will discuss in the trigger meeting.

·      Previous Issues:

o   L0L1 turn-off issue (Tim changed fan tray for L0L1 crate 62)

o   low rate issue with QT crate (investigation ongoing).

·      Gene showed the space charge calibration plot and expressed concern about the space charge calibration.

·      Drilling finished yesterday.

·      Readiness checklist update for physics available. Shift crew started to follow normal operation procedure. (observation: many inexperienced shift crew members and new trainees this year.)

·      Shift operation. One of the DOs failed the training exam (Oxygen Deficiency training) multiple times, had to contact the training coordinator, and eventually showed up with the training finished.

 

§  Plans

·      Gene will analyze the first run this morning for studying the background.

·      VPD: tac alignment will be next when we have collisions.

·      FST will switch back from 9 time bin to 3 time bin.

·      Jeff will change the logic for how forward detectors are included in the trigger.

·      Production configuration is needed. Jeff will clean up the file. Default configuration: pp200_production_LowLuminosity

·      Determine the gain for the polarimeter at the beginning of the fill. Jeff has a configuration for this.

·      Readiness checklist update for physics available.

·      Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.

·      a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

STAR daily operation meeting 04/29/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      111x111 bunch collisions for experimental commissioning overnight.

·      Maintenance today 08:00 to 17:00; machine development Tuesday.

                          

§  STAR status

·      The VPD earliestTAC was chopped off at 1950 (max). Since Run 25119110, global timing was moved 2.5 ns earlier (Find delay 117 to 112). We might need another CAL SCAN to check with collisions. Endcap needs a scan anyway.

·      EPD trigger timing - one clock late, status: Eleanor found a blank VT201. ZDC, BBC, VPD need to revert to original parameters.

·      ZDCSMD issue (west horizontal channel 4 was hot); power cycling the MXQ crate didn’t work. ZDCSMD gate scanning done, and default values were not changed. Hank: take another Ped run before evaluating this.

·      Running since last evening, pp_200_commissioning. Details about promoting to physics triggers will be discussed at the Trigger Board meeting.

·      L4 calibration. Diyu has received the calibration file from Gene from Run 15. Will investigate.

·      Drilling seems to be in the IR only. All evaluations were done. Lijuan: we should have this discussion earlier next time.

·      VPD: tac alignment will be next when we have collisions. Will redo the voltage scan too. Call Daniel and Frank.

·      Previous Issues:

o   L0L1 turn-off issue.

o   low rate with QT crate in issue (investigation ongoing).

 

§  Plans

·      When we have beams tonight, call Oleg, VPD (Daniel B.), EPD (Maria, Mike), Prashanth, Akio.

·      Put in sTGC later today.

·      FST will switch back from 9 time bin to 3 time bin.

·      Determine the gain for the polarimeter at the beginning of the fill. Jeff has a configuration for this.

·      Maintenance (access) today: 1) FCS moving in; 2) EEMC 5S2 box check and burp (Will J provided instruction and Prashanth received it); 3) Possible EPD air intake diverter; 4) L0L1 Crate work (Tim is planning to change the fan tray and change the voltage setting.) 5) Concrete drilling for ePIC (after 9 am).

·      Crossing angle of 1mrad to be added after all calibrations or close to physics.

·      Noise run should be taken.

·      Readiness checklist update for physics today.

·      Scaler timing for Polarization monitoring. It’s on the to-do list of Chris.

·      a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

STAR daily operation meeting 04/28/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      56x56 bunch collisions for experimental commissioning continue.

·      111x111 bunch collisions later tonight.

·      Maintenance day 08:00 to 17:00 Monday; machine development Tuesday.

                          

§  STAR status

·      BBC, EPD (timing and bias scan), VPD, EMC are all commissioned. EPD still needs this trigger work - one clock late (trigger group will look into it).

·      Running since last evening, pp_200_commissioning, with MB trigger (BBC+TOF0) and high multiplicity trigger (with TOFMult4 > 8, for QA purposes for now). Fast Offline data has been requested and is running for Prithwish, Shengli, et al. Shengli already produced a QA plot, which looks reasonable. Discussion tomorrow at the Trigger Board meeting.

·      L4 issue seems to be improved by Diyu with a space charge calibration update [1]! Flemming suggested that Diyu consult Gene about the pp 200 parameters for the space charge calibration. Currently DCAz still looks strange.

·      Previous Issues: 1) L0L1 turn-off issue, 3) low rate issue with QT crate (not solved yet). Update from Jeff, Tim, Hank (after yesterday’s 11:30am discussion)?

·      Aihong’s ZDCSMD work is finished; he is still analyzing the data.

 

§  Plans

·      Crossing angle of 1mrad to be added after all calibrations or close to physics.

·      Maintenance (access) tomorrow: 1) FCS moving in; 2) EEMC 5S2 box check and burp (Will J provided instruction and Prashanth received it); 3) Possible EPD air intake diverter; 4) L0L1 Crate work (Tim is planning to change the fan tray and change the voltage setting.) 5) Concrete drilling for ePIC (after 9 am).

·      Triggers promoted to physics discussion at Trigger Board meeting tomorrow.

·      Readiness checklist update for physics next week.

·      Polarization monitoring. It’s on the to-do list of Chris.

·      a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

STAR daily operation meeting 04/27/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      56x56 bunch collisions for experimental commissioning started around 12:30am.

·      New store started at 7am.

·      Detector commissioning overnight, continuing this weekend.

                          

§  STAR status

·      No access.

·      ZDC, BBC, VPD DSM timing are calibrated (was one tick late), while EPD still needs this timing calibration (Chris will work on it). VPD-tac offset was restored to last year’s value instead of zero.

·      We observed a strange vertex z distribution in L4 but not L3 [1]. Diyu: TPC calibration? pp 500 parameters are used. Going to look at the correlation between multiplicity and vertex distribution.

·      Previous Issues: 1) L0L1 turn-off issue, 2) L2 crashing with prepost (fixed), 3) low rate issue with QT crate (not solved yet). Tim and Jeff left instructions for the shift crew on the L0L1 issue. Issue 1): under-voltage turned off the crate. David will communicate with Tim, Jeff, etc. Hank will ask Jack about the voltage setting.

·      Forward detectors, not running yet. Will include them soon.

·      Finished: Cal Scan (Oleg) within 1 ns w.r.t. last year, BBC (Akio), EPD ongoing (Maria). Global time can be set.

 

§  Plans

·      Aihong should look at the ZDCSMD.

·      Continue trigger commission: EPD (Maria, Mike),VPD (Geary, Daniel Brandenburg, Frank).

·      VPD HV scan.

·      To shift crew: ETOF and MTD HV should be OFF instead of STANDBY.

·      Polarization monitoring. It’s on the to-do list of Chris.

·      Plan after the trigger detector commission later today: 1) BBC-AND + TOF > 0 as MB and/or maybe 2) BBC trigger + mult>20 (to start with); all configurations should have the crossing angle.

·      Readiness checklist update for physics next week.

·      ½ day (9-1PM? Prashanth will find out and keep us posted on staropsa) of Maintenance on April 29th (Monday) and a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

 

§  When we have access.

·      (access needed) L0L1 crate shut off and check PS.

·      (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it.

·      (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again.

STAR daily operation meeting 04/26/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      Beam setup last night and 12x12 store for experimental setup early this morning. Collisions!

·      Global timing looked good, but the beam condition is not good with large background (see vertex z distribution run 25117023)

·      Continue beam setup in the AM, and more experimental commission in the PM and over the weekend.

                          

§  STAR status

·      No access.

·      There are three issues: 1) L0L1 turn-off issue, 2) L2 crashing with prepost, 3) low rate issue with QT crate. Chris worked on issue 2) and it seems to be fixed. Status: stable for 1.5 days. Jeff: 1) happened once this morning due to “under voltage error 43”. David will look into the alarm system. Tim: could be the PS. (will need access)

 

§  Plans

·      Trigger commissioning: Prepost (Chris), EPD (Maria, Mike), TOF, BBC (Akio), VPD (Daniel Brandenburg, Frank, Geary). We will have a call list when we have collisions, e.g., JH, Akio.

·      VPD HV scan.

·      Shift crew should pay attention to incoming events, not just the deadtime.

·      Shift crew should keep an eye on the VME crate issue.

·      APEX mode for running single beams with the tune file.

·      Readiness checklist update for physics next week.

·      ½ day of Maintenance on May 1st (Wednesday) and a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

 

§  When we have access.

·      (access needed) L0L1 crate shut off and check PS.

·      (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it.

·      (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again.

 

STAR daily operation meeting 04/25/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      Beam setup last night.

·      First collision is expected to be this evening (maybe 6x6 bunches).

·      ½ day of Maintenance on May 1st (Wednesday) and a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

 

§  STAR status

·      No access.

·      There are three issues: 1) L0L1 turn-off issue, 2) L2 crashing with prepost, 3) low rate issue with QT crate. Experts will investigate them.

·      When we have access.

o   (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it.

o   (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again.

 

§  Plans

·      Trigger configuration for low lumi pp will be provided by Jeff.

·      Trigger commissioning: EPD (Maria, Mike), TOF, BBC (Akio), VPD (Daniel Brandenburg). We will have a call list when we have collisions, e.g., JH, Akio.

·      Will check the duration of the run.

·      Shift crew should keep an eye on the VME crate issue.

·      APEX mode for running single beams with the tune file.

·      Readiness checklist update for physics next week.

·      May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

 

STAR daily operation meeting 04/24/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      Blue injection kicker and PS work resolved/finished! Beam setup last night.

·      First collision is expected to be tomorrow evening or Friday.

·      ½ day of Maintenance on May 1st (Wednesday) and a full Maintenance Day on May 14th (Linda Horton’s visit at BNL)

·      Emergency power test. Prashanth: 10:30am, Wednesday.

 

§  STAR status

·      No access.

·      Jeff and Chris worked on the L0L1 and L2 issue and confirmed the cosmic configuration with prepost enabled also crashed the L2 and L0L1. Update? Also, Run-25114053 around 19:30, L2 and L0 crashed (tune_2024_prepost) and shift crew brought it back by following the expert’s instruction. Related? Experts baffled.

·      When we have access.

o   (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it.

o   (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again.

 

§  Plans

·      Trigger configuration for low lumi pp will be provided by Jeff. Trigger commissioning: EPD (Maria, Mike), TOF, BBC (Akio), VPD (Daniel Brandenburg). We will have a call list when we have collisions.

·      APEX mode for running single beams with the tune file.

·      Readiness checklist update for physics next week.

·      May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

STAR daily operation meeting 04/23/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      Blue injection kicker and PS work continue. Beam setup early afternoon.

·      First collision is expected to be delayed due to the ongoing works and checks.

·      Maintenance on May 1st (Wednesday).

·      Emergency power test next week. Prashanth: 10:30am, Wednesday.

 

§  STAR status

·      No access.

·      L2 seems to be running fine with prepost in tune configuration. (Hank and his team will investigate, as previous interpretation didn’t seem to explain). Jeff will do it when we have beams.

·      L0L1 VME crate crashed when running the tune_2024_prepost. We will keep an eye on it.

·      Same as yesterday.

o   (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it.

o   (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again.

 

§  Plans

·      Jeff will investigate the system with L0/L1 and L2 when there’s beam activity.

·      APEX mode for running single beams with the tune file.

·      Readiness checklist update for physics next week.

·      May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

STAR daily operation meeting 04/22/2024

 

RHIC Schedule

4K cool down.

Plan for this week,

  • Blue injection setup last night and saw some beam activities! 
  • Blue main quad PS diagnostics continue, with some other issues to investigate. Some checks will be done with Yellow, but it is more or less ready for injection.
  • First collision expected April 23-25 (?)
  • Maintenance on May 1st (Wednesday).
  • Emergency power test next week, but not sure what day yet.

 

STAR status

  • No access, but we may go to restricted access later today.
  • BTOW hot spots. No action needed from shift crew.
  • L2 seems to be running fine with prepost in tune configuration. (Hank and his team will investigate, as previous interpretation didn’t seem to explain).
  • L0L1 VME crate crashed when running the tune_2024_prepost. Experts may look into it.
  • EPD is on now. EPD reduced the QT threshold to 3, and can bring it back to 35 after the scan. 
  • Same as yesterday but we may have access later. 
    • (access needed) EPD status: Mike: there are yellow and red lights on the TUFF box; need to look into what they mean. (Everything is fine.)
    • (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it. 
    • (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again. 
    • FST time bin shift. Ziyue will do it (Monday) and keep the shift crew posted. Jeff will adjust some setting and will document it on shiftlog.

 

Plans

  • Jeff will investigate the system with L0/L1 and L2. 
  • Whether VPD should be on or off during single beam, experts will monitor and advise.
  • APEX mode for running single beams with the tune file. 
  • Readiness checklist update for physics next week.
  • May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

STAR daily Operation meeting 04/21/2024

 

 

RHIC Schedule

4K cool down.

Plan for this week,

  • Blue injection failed again yesterday; work continues today.
  • Yellow will depend on the blue beam schedule. First collision expected April 23-25.
  • Maintenance on May 1st (Wednesday).
  • Emergency power test next week, but not sure what day yet (see Prashanth’s email)

 

STAR status

  • Back to controlled access.
  • Same as yesterday. 
    • (access needed) EPD status: Mike: there are yellow and red lights on the TUFF box; need to look into what they mean. 
    • (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it. 
    • (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again. 
    • FST time bin shift. Ziyue will do it (Monday) and keep the shift crew posted. 
  • L2 died a couple of times and experts (Hank, Akio, Eleanor) helped fix it (eliminating the prepost seemed to fix the issue). Chris will look into it. New instructions here: https://www.star.bnl.gov/public/trg/trouble/L2_stop_run_recovery.txt 

 

Plans

  • Jeff will make a different tune file without prepost for now. Run with prepost as the default tune file; running without it will be plan B. 
  • Cosmic data taking with Reverse Full field.
  • Can use APEX mode for running single beams. 
  • Readiness checklist update for physics next week.
  • May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

STAR daily Operation meeting 04/20/2024

 

 

§  RHIC Schedule

4K cool down.

Plan for this week,

·      Blue injection failed yesterday; work continues today.

·      Yellow PS checkout over the weekend (controlled access) and injection on April 22 (Monday); first collision expected April 23-25.

·      Maintenance on May 1st (Wednesday).

·      Emergency power test next week, but not sure what day yet (see Prashanth’s email)

 

§  STAR status

·      BEMC HV fixed.

·      Back to restricted access.

·      (access needed) EPD status: Mike said it’s still a mystery and will have someone look at the lights on the EPD rack in the Hall (DO just did). Tonko made a comment on starops and the mystery seems to be resolved. Mike: there are yellow and red lights on the TUFF box; need to look into what they mean.

·      (access needed) EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it.

·      (access needed) ESMD issue, crate 85? Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again.

·      (access needed) sTGC air blower alarm seems to have issues. Tim fixed it!

·      L2 died and experts instructed the correct way of bringing L2 back (MXQ message suggests a link, and experts are looking to see if it is updated. https://www.star.bnl.gov/public/trg/trouble/L2_stop_run_recovery.txt ). Hank will update the instruction.

·      FST time bin shift. Ziyue will do it (Monday) and keep the shift crew posted.

 

§  Plans

·      Shift crew  Check online plots timely.

·      Cosmic data taking with Reverse Full field.

·      Can use APEX mode for running single beams.

·      Readiness checklist update for physics next week.

·      May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

STAR daily Operation meeting 04/19/2024

 

 

RHIC Schedule

4K cool down.

Plan for this week,

  • Blue and Yellow PS checkout and injection into blue? on April 19 (today).
  • Yellow PS checkout over the weekend (controlled access) and injection on April 22 (Monday); first collision expected April 23-25.
  • Maintenance on May 1st (Wednesday).

 

STAR status

  • BEMC HV. Oleg: today, may need Tim to make cables to finish the work, depending on EPD.
  • sTGC air blower alarm seems to have issues. Tim will look into it when possible.
  • EPD status: Tim removed East EPD TUFF box from hall. Tim: reinstalled this morning. Tonko can take a look. 
  • EPD cooling: fan blowing to the fee box needs to be improved. Prashanth is working on it. 
  • ESMD issue, crate 85? Shift crew had asked, and Will. J said he will work on it today. Will: the issue is on the electronics or could be water flowing, but not understood yet. Someone can try to clear the bubbles by removing the quick release, etc…Prashanth/Will will email Bill to follow this up and try when we have access again. 
  • L2 died and was brought back, but L4 is still dead (the machine died). For L4 issues we should call Diyu. L4 should be back now. Will give updated instructions to the shift crew for dealing with this.
  • TPC hot spots. Experts, please investigate. Flemming: TPC has 2 hotspots that are always there; iTPC has hotspots that come and go. Will update the reference plots Monday.
  • Jeff already made a tune file. Jeff will put one prepost in BBC, EPD, ZDC etc. 
  • BCW has been running fine since yesterday.

 

Plans

  • Will clarify the schedule for emergency power tests with RHIC.
  • Cosmic data taking with Reverse Full field.
  • Can use APEX mode for running single beams.
  • Readiness checklist update for physics next week.
  • May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

STAR daily operation meeting 04/18/2024

 

 

  • RHIC Schedule

4K cool down.

Plan for this week,

  • 26 GeV PS test tonight (STAR can stay open during this test)
  • Blue beam (100 GeV) injection on April 19 evening (tomorrow evening).
  • Yellow beam injection on April 21 (Sunday); first collision expected April 23-25.

 

  • STAR status
  • Maintenance: only TPC water work (temperature sensor) was done, but other tests were postponed to next Wednesday (April 24). 
  • sTGC air blower alarm seems to have issues, but the temperature is stable and has been included in the run. Tim will check this. 
  • BTOW issue with crate id 26 and VME 24. Oleg fixed it and will put the BSMD back after this meeting. 
  • EPD status: Tim needs to replace TUFF box (aim for tomorrow.) Cooling: fan blowing to the fee box needs to be improved. 
  • ESMD issue, crate 85? Status? 
  • Will check to include ETOW in the run. 

 

  • Communications:
  • Call MCR for communications to CAS.

 

  • Plans
  • Cosmic data taking with Reverse Full field.
  • Readiness checklist update for physics next week.
  • Akio wants to be on call list for beam tomorrow evening. 
  • Jeff will make a tune file today.
  • BCW – need issue reports. Shift crews need to document all actions.
  • May 14th (Tuesday), RHIC will open for Linda H (DOE office of science)

 

STAR daily Operation meeting 04/17/2024

 

 

§  RHIC Schedule

 

4K cool down.

Plan for this week,

·      Blue beam injection on April 19 (Friday).

·      Yellow beam injection on April 21 (Sunday)

·      First collision expected April 23-25.

 

§  STAR status

·      Power dip last night. Subdetectors were brought back up, except for a few issues:

o   sTGC air blower.

o   TPC air blower, Alexei will look into it with help from Tim.

o   BEMC is back, and EEMC CANBUS are down, no control?

o   Some works need to be done for the gas system of MTD. MTD can still be operated safely.

 

·      Mike Lisa: EPD seemed to have issue with TUFF box and bad voltages. Shift crew turned EPD off during the evening shift. Mike turned them on this morning, and Tim needs to take a look. Cooling will be added to the FEE box.

·      Geary: ETOF instruction was reminded on starops. Will remind shift crew and include ETOF for noise run later today after maintenance.

·      Eleanor: fixed the BCW and gave instructions to the shift crew.

·      Will J: EEMC chiller status and how to turn things off during the power tests. This is already noted.

·      ESMD issue, crate 85? Tim will try to fix it after the power tests.

·      Flemming: requested special run for TPC and was taken during the evening shift, Run 25107059

·      RHIC status computer on shift leader desk (Jim Thomas sent an email to Angelika for username and password)

·      CAS will come to take down the magnet.

 

§  Plans

·      Downtime (10:30-17:00) today. Emergency power test, magnet power test, MCW maintenance (part change, postponed to next week!), TPC water maintenance (temperature sensor)

·      Cosmic data taking with Reverse Full field.

·      Detector status update.

·      Readiness checklist update for physics next week.

·      Power dip recovery instruction needs to be reprinted.


RHIC/STAR Schedule
 

2023 ops meeting notes

Notes from STAR Operations Meeting, Run 23

August 7, 2023

RHIC Plan:

Shutdown early.


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Magnet trimWest tripped again”

Evening shift: Cosmics

“Expert managed to bring the magnet back around 17:05."

Owl shift: Cosmics

 “Smooth cosmics data taking during the whole night, no issues.”

Other items:

“I stopped TPC gas system ~8:10 at circulation mode and started high Ar flow. Magnet is down.”

“I started N2 flow for TOF, MTD and eTOF systems.”

“We turned off EPD and currently we are turning off VME crates”

“I powered down btow & gmt01 DAQ PCs. For now.”

Tonko will shut down iTPC and TPX after the meeting (leaving 1 for tests). Schedule time with Christian for maintenance.

Jeff will keep 1 or 2 evbs up but tomorrow will shut the rest down.

Cosmics summary: 17% of runs bad. Final count: 51M (1.8x what Yuri wanted)
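As a quick cross-check of the tally above, the originally requested sample size can be backed out from the overshoot factor. The numbers come from the note; the helper function is ours:

```python
# Hypothetical cross-check of the cosmics tally ("51M, 1.8x what Yuri wanted"):
# back out the implied original request from the overshoot factor.

def implied_request_m(final_m: float, overshoot_factor: float) -> float:
    """Originally requested sample size, in millions of events."""
    return final_m / overshoot_factor

print(round(implied_request_m(51.0, 1.8), 1))  # about 28.3M events requested
```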

Shifters need to stay until end of morning shift (and help experts with shutdown). Officially cancel evening shift.


August 6, 2023

RHIC Plan:

Shutdown early.


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Magnet trimWest tripped, called the CAD, they will try to bring it back” - no details

“Now, FST is completely shut down.”

“Alexei arrived, he solved the TPC oxygen alarm (gap gas O2) and confirmed that west laser does not work.” - will work on it tomorrow; will look at east laser today

Evening shift: Cosmics

“Magnet trimWest tripped. called the CAD.”

“Power dip and magnet dip around 10 PM."

“TR[G] component are blue but when all the components are included, the run won't start. When only include bbc and bbq, the run can start but DAQ Evts stays zero. DAQ: multiple VMEs are bad including VME1, we masked out all the bad VMEs.”

Owl shift: Cosmics

“L0 seem to have some issues, as Tonko also noted in the ops list; we rebooted the L0L1 VME, but still could not start a run after that, the daq was stuck in the configuring stage.”

Other items:

“GMT gas bottle was changed.”

“Alarm handler computer was completely stuck, we had to hard restart the machine.”

“We powercycled L0 crate once more and tried to run pedAsPhys with TRG + DAQ only and it worked.”

“Trigger rates were high, I called Jeff and he helped me to realize that majority of trigger nodes was taken out and I need to include them.”

5 hours of good cosmics (25/30M so far, ~1M/hr). Tomorrow morning we will communicate with the SL and start purging first thing, assuming we hit the goal. If a detector is not part of cosmic running, it can start earlier. sTGC will be done Monday.
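The time-to-goal estimate implicit in the note above is simple rate arithmetic. A minimal sketch (the helper name is ours; the numbers are from the note):

```python
# Hypothetical time-to-goal estimate: ~25M of a 30M-event cosmics goal
# collected at ~1M events/hour leaves about 5 more hours of running.

def hours_to_goal(collected_m: float, goal_m: float, rate_m_per_hr: float) -> float:
    """Remaining events divided by the accumulation rate, in hours."""
    remaining = max(goal_m - collected_m, 0.0)
    return remaining / rate_m_per_hr

print(hours_to_goal(25.0, 30.0, 1.0))  # -> 5.0 hours remaining at ~1M/hr
```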

Advice to shifters: cycle VME a few times. After 3 or 4 something might be wrong.

Tomorrow after end of run will turn off all trigger crates; all flammable gases.


August 5, 2023

RHIC Plan:

Shutdown early. (See email forwarded to STARmail by Lijuan at 3:30 PM yesterday for more details.)


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Magnet is ramped up.”

“Temperature in the DAQ room is low enough, Tonko and Prashanth brought machines back. Moving cooler in the DAQ room is turned off so the repair crew could monitor how the AC runs”

“We turned on TPC, TOF, MTD and GMT for the cosmics”

“Tried to include L4 to the run, l4evp seems to be off”

“Alexei fixed the laser, both sides now work.”

Evening shift: Cosmics

“Will Jacobs called that he turned off the EEMC HV and LV to the FEE. We should leave EEMC out of the running over the weekend.”

"Trim west magnet tripped around 7:30 PM, called 2024 at 10:00 PM. They brought back the trim west magnet.” (Will follow up this evening) — these runs were marked as bad

Owl shift: Cosmics

“West camera is not showing anything” (Flemming sees no tracks) → “Both sides were working for us”

Other items:

Need to make sure shifters don’t come.


August 4, 2023

RHIC Plan:

Decision coming later today (fix starting in a week and resume vs. end and start early [STAR’s position]). Once official, will inform next shift crews.


Notable items/recap from past 24 hours:

Day shift: No data

“Magnet polarity is switched but the magnet is not ramped up yet.”

“MIX VME seems to have some hardware problem” -> fixed during the evening shift [Tim power cycled and cleared a memory error on the fan tray]

Evening shift: No data

“Nothing to report”

Owl shift: No data

“Nothing to report”

Other items:

Magnet up → waiting for DAQ room AC to be fixed this morning (hopefully) [UPDATE: fixed] → DAQ room computers turned back on → cosmics for 1.5-2 days → end Monday and purge → week after next, things coming down

Looks like we’re out of water again in the trailer


August 3, 2023

RHIC Plan:

No official decision yet. Likely end of tomorrow. Nothing changes (shift crews, etc.) until we have that info.


Notable items/recap from past 24 hours:

Day shift: No physics

Travis: “calibrated star gas detection system”

“etof_daq_reset command now works”

“FST Cooling was refilled. Reservoir level was filled from 66.6% to 90.4%. Swapped from pump 2 to pump 1.”

“We turned the detectors to safe states to prepare for the transfer switch test. Magnet is ramping down right now.” → “The test is done and VMEs are back with David's help.”

“To reduce heat load while the DAQ Room A/C is offline, I'm starting to shutdown DAQ computers at this time (almost everything in the DA Rack Row is a candidate for shutdown).”

“DAQ computers which were shut down by Wayne: tpx[1-36] except tpx[14], which is not remotely accessible (dropped out of Ganglia at ~12:40 pm - possible hardware failure?); itpc[02-25]; fcs[01-10]; EVB[02-24]”

Tim: “Replaced QTD in EQ3 with the non used QTD in EQ4”

“BCE crate: DSM1 board in slot 10 (Id9) and slot 11 (Id10) are swapped. Board address changed accordingly.”

Evening shift: No physics

Tonko: “shut down even more DAQ machines; all stgc, all itpc, all tpx, all fcs, all fst, tof, btow, etow.”

Jeff and Hank fixed the trigger problems mentioned last time.

SL had a medical emergency and was transported to hospital. Thanks to Daniel for coming a bit early to take over. I will take her shift tonight.

Owl shift: No physics

Nothing to report

Other items:

Magnet polarity flipping today: 2 - 3 hours starting now. Will run cosmics for 1.5 - 2 days.

AC work yesterday, ongoing today. DAQ room still hot. Will not turn on unless this is fixed.

Just use TPC, TOF, MTD, BEMC


August 2, 2023

RHIC Plan:

Today: maintenance. Tomorrow - rest of run: ?


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs + cosmics

At about 12:30, helium leak at 4 o’clock (blue — fixed target not possible either). Developing situation — may get the decision to end the run within the next few days. JH advocates for reversing polarity for two days after this maintenance before ending (because we couldn’t get it done before/during the run). STAR PoV: data-taking efficiency, machine status — best benefit from shutting down, save funds for next year. 4 months between end of this one and beginning of next one. Discussion point raised by Lijuan: how long do we need for cosmic data taking? Switch polarity immediately after maintenance for 2 to 3 days. Prashanth will talk to Jameela. When polarity is switched, Flemming will talk to Yuri.

Evening shift: Cosmics

“MCR called that due to the failure they won't be staffed over the night. In case anything happens, we need to call 2024”

Owl shift: Cosmics

“There was an alarm in VME on the first floor platform (cu_vme62_minus12volts, cu_vme62_plus12volts, cu_vme62_plus5volts & cu_vme62_fanspdm_nms). So we turned on VME62 in the first floor platform control, and the alarm stopped.”

“we had `L1[trg] [0x8002] died/rebooted -- try restarting the Run` critical message in the DAQ, then lots of `Error getting event_done client socket` messages. Also, vme-62_lol1 alarm sounded, DOs restarted crate. We rebooted all in the DAQ, then did the etof restart procedure as well.”

Summary: “had daq issues which we were not able to solve during the night, trigger was showing 100% dead (see details in shiftlog). We tried rebooting crates, first only BBC, then all of them one by one, but it did not solve the issue.” — Ongoing problem… To make sure the TCD is OK, do pedasphys_tcdonly with trigger and daq. Tonko thinks something is wrong with BBC.

Other items:

Modified ETOF procedures in detector readiness checklist and printed out/uploaded new ones (ETOF critical plot instruction, Canbus restart procedure also updated)

Should crate 54 still be out? — 54 is part of the old GG (control). And can be left off, yes.

Accesses? Tim for EQ3-QTD; Gavin: “Te-Chuan and I plan to refill the FST cooling system during the access tomorrow.” Alexei: west laser. Tim & Christian swapping BE-005, BE-006 to isolate the 10 missing trigger patches which come and go.

Will make a list of detectors needed for cosmics and reduce shift staffing now. SL can decide (SL+DO minimum until gas watch mode).

Daq room temperature going up while AC is being worked on today.


August 1, 2023

RHIC Plan:

Today: physics. Wednesday: maintenance (7:00 - 16:00). Thursday - Monday: physics.


Notable items/recap from past 24 hours:

Day shift: Cosmics + mostly smooth physics running

“We tried to powercycle EQ3 crate and reboot trigger, the purple parts in the EPD plots belong to eq3_qtd and the orange to eq3.” — EQ3 problem seems to be fixed. EQ3_QTD problem won’t be until the board is swapped. Pedestals were not being subtracted correctly when qtd died

Evening shift: Cosmics + physics

“Two attempts for injection had failed at late stages; a third one made it to PHYSICS ON, but it lasted only about a couple of hours”

Owl shift: Mostly smooth physics running

“ETOF critical plot had a new empty strip in Run 24213007, after run was stopped DOs followed the restart instructions, we rebooted ETOF in the daq [etof_daq_off], critical plots look fine in Run 24213008. Note: it should be clarified if this is indeed the right thing to do, because it takes more than 5 minutes between the runs which could be used for data taking.” — should be done between fills, as instructions say. Update: SL wrote an entry in the shift log clarifying the ETOF procedures.

“The very first physics run of the new fill (Run 24213004) was a complete 45 minute run without any noticeable issue, however, strangely it only shows about 244K events (much less compared to the usual ~10M). Also, Run 24213012 was a complete 45 minute run, and it shows about half of the expected events, around 4.5M”. Database issue? Rate was fine. Talk to Jeff (out for the week). Flemming: if a run is marked as good before counting is finished, it shows a wrong number.

Other items:

“we just started the last big water bottle”

Another medical issue with SL trainee (SL starting today), but will hopefully not miss any shift.

“L3 Display: strange issue with lots of tracks [clusters?] at 7 o'clock in some events” (changeover checklist from owl shift) [check 24212006]

Large beta* test for sPHENIX (normal for STAR) with 12 bunches, lower lumi. Normal physics run after that. Update: sPHENIX requested no-beam time after that normal fill for 4 hrs.

Accesses tomorrow: Tim [removing bad board, EQ4 put in]


July 31, 2023

RHIC Plan:

Today-Tuesday: physics. Wednesday: maintenance


Notable items/recap from past 24 hours:

Day shift: Cosmics

"eq3_qtd is still out” — affects EPD. Hank is looking. Christian swapping in qtd or taking out of eq4 which is not being used and configuring fine (during Wednesday’s maintenance). Up to Hank. Haven’t heard back from Chris this morning.

ETOW: “_crate_ 1 lost its ID and so results from that crate are junk.”

“sTGC yellow alarm for pentane counter, called Prashanth. He said that we should monitor it and if it changes rapidly, we should call him again.”

Evening shift: Physics

“PHYSICS is ON @ 7:40 pm. Finally”

“low luminosity as it is almost 6.5 kHz at STAR ZDC.” — voted to dump. Refilled with higher rates ~ 13 kHz.

Owl shift: Physics

“Stopping the run did not succeed, attached is the trigger status (everything is in ready state on the webpage, including trigger)” “[E?]Q2 was in an incorrect state, it was at least a communication issue, and EQ2 needed a reboot, which could have been tried from the slow controls GUI (1st floor control platform), but Jeff did it from the command line. He also said in such a case (after realizing this is a trigger issue) a trigger expert could also have been contacted.” — procedure: reboot, power cycle if necessary, call Hank.

“There are two empty bins in BTOW HT plot. We saw it earlier today, too. This issue seems to come and go.” — be005 blank. No idea of cause of problem or of recovery right now.

“TPC:The iTPC #cluster vs sector QA plot has a hot spot for sector 19 (attached). This issue has persisted since the beginning of this fill (run 24211047)” — max # of clusters is a bit smaller in that sector. Has been going on the whole run and is not an issue.

“DO switched Freon line from line A to line B following an alarm that said that the pressure went below 5 psi.”

Other items:

Shifters doing better; one DO trainee returned to shifts, one may return today. Both seem set to assume their duties as DOs next week, with affirmative statements from their SLs.

Methane: identified methane source — 18 cylinders before running out, good for rest of run. (Also 2 bottles from national labs).


July 30, 2023

RHIC Plan:

Sunday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Cosmics

“They have problems with injecting blue ring and need short access”

Evening shift: Cosmics

Storm => “Magnet trip at ~8:25”; “VME crates 63, 77 and 100 tripped…Lost Connection to BBC, VPD and EPD but we believe that this is because they all use BBC LeCroy. Will try to restore connections soon. TPC FEE were off before the storm.”

Owl shift: Cosmics

Persistent “ETOW: Errors in Crate IDs: 1 -- RESTART Run or call expert if the problem persists.” message. It continued after a load write and read on individual FEE crates and a master reload. ETOW seemed to be recording normal data so they kept it in the run. “Tonko said this issue should be fixed for physics.” — suggested power cycling the crate but didn’t know how to do it. Oleg may know how if Will doesn’t respond. Corruption means stale data. Update: the DO from today’s morning shift was able to fix the problem by following the manual’s instructions for power cycling before the load write and read. They think the instructions could be updated to be a bit clearer.

Other items:

Another DO trainee had a health problem and needed to stay home from this owl shift. Will update with any developments. DO trainee from evening shift is back from the hospital resting for a few days. Hopefully will be able to take her DO shift next week as normal. Need to verify their capabilities before they would start as DOs next week.

Jim suggests a “Weather Standdown [w]hen a thunderstorm is reported to be approaching BNL”. Will be implemented.

From this shift: “l2new.c:#2278 Most timed out nodes : EQ3_QTD::qt32d-8 (2000)”; “We were not able to bring back EQ3_QTD, restarted the EQ3 crate multiple times and rebooted the triggers. When I try to start the run after the reboot, the error message says Check detector FEEs. Contacted Mike Lisa, he will bring it up at the 10 o'clock meeting. Right now we started the run without eq3_qtd.” David Tlusty has been contacted about a button not working for restarting the crate (#64). Alternative with network power switches? Not just the QTD is affected, but the entire crate; the VME board is not coming back up. May need access. Update: it can now be turned on in slow controls, but the STP2 monitor says it’s off. Akio couldn’t be reached about this, and eq3_qtd remains out.

Alexei made an access for the laser (laser run was taken and drift velocity and other plots look good, but west laser is not working and will require longer access on Wednesday), but DOs have been informed and will pass on that only east camera should be used. Alexei also looked at EQ3: not responding. Will send Hank an email after trying a hard power cycle. Seems to still be on but not communicating.

Primary RHIC issues: power supplies; power dip on Thursday; magnet in ATR line is down. Weather looks better for the next week.

New procedure: “After rebooting eTOF trigger (or rebooting all triggers)[,] in etofin001 console (eTOF computer) command "etof_daq_reset". It should be typed after bash.” This is now written on a sticky note by the ETOF computer and Norbert is contacting Geary about adding it to the ETOF manual.


July 29, 2023

RHIC Plan:

Saturday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Cosmics

Tim: “replaced compressor contactor for STGC air handler. Compressor now runs SAT.”

“Only subsystem which is not working now is the laser”

Evening shift: Cosmics

“one of the main magnets @ AGS has tripped and they are going to replace it”

“MCR changed the plan as they have a problem with one of the booster magnets”

“Alexei came around 8:00 pm and he fixed the east side camera, but not the west as he needs an access in order to fix it.” (not during night shift, after Saturday 20:00)

“…event display…shows the cosmic rays but not the laser tracks."

Owl shift: Cosmics

“Laser run at 7:15 AM, the drift velocity plot is empty” (leave it out for now)

Other items:

Related to SGIS trip: Removed Prashanth’s office number from expert call list. JH printed signs now posted in the control room with an instruction of what to do in the case of an alarm. Shift leaders have been briefed on the procedure.

“Noticed that EVB[6] is put back, there is no info about it in the log.” — since it seems to be working, leave it in.

DO trainee from evening shift had medical emergency. Shift crew from this current shift is with her at hospital. For this week, can operate without DO trainee, but she has two DO weeks (Aug 1, Aug 15). Will hopefully get an update on her condition today and plan accordingly.


July 28, 2023 

RHIC Plan:

Friday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Mostly smooth physics runs + Cosmics

“EVB1 stopped the run, was taken out for further runs, Jeff was notified.” (Can put it back in the run; was actually a small file building problem)

“Temperature in the DAQ room was high in the morning, experts went to the roof and half-fixed the problem. They need access for a longer time. Prashanth brought another portable fan and the temperature is now ok.”

Evening shift: Cosmics

“6:41 pm at flattop; then unexpected beam abort…problem with the power supply”

“magnet trips and the TPC water alarm fires…Few minutes later the water alarm system fires at the control room…MCR informed us there is a general power issue and many systems tripped…slow control systems are down”

Owl shift: No physics

“We tried to bring back all the subsystems over the night.” Ongoing problems: “Laser: No, called Alexei…TOF: No, cannot reset CANBUS need to call Geary, already called Chenliang and Rongrong…MTD: same as TOF…ETOF: No…sTGC: No, air blower problem, Prashanth is aware” (Tim is currently checking on it; will let Prashanth, David know when it’s done)

“MCR is also having multiple issues with bringing back the beam”

Other items:

Thanks to experts (Jim, Oleg, Prashanth, Chengliang, Rongrong, Chris, anyone else I missed) for help during the disastrous night

Clear instructions for shift leaders: call global interlock experts on call list, turn off everything water cooled on platform. Written, and PC (or outgoing SL) talking to each shift leader and walking them through logging in and doing it.

Bring back TOF first (Geary will look at it after this meeting), laser second, …

Experts: if your device is on network power switch, send David email with the information so he can upload list to Drupal


July 27, 2023

RHIC Plan:

Thursday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Run restarted ETOF>100 errors” (multiple times) + “Tried eTOF in pedAsPhys_tcd_only - failed, excluded eTOF”

“Temperature in DAQ room still slightly rising, needs to be monitored.” (as of 9:30: room around 84 F; high for next 3 days: 89, 91, 90). 90+ is danger zone => shutdown

Evening shift: Cosmics + mostly smooth physics running

“I had to stop this run due to a critical message from evb01: daqReader.cxx line 109 states "Can't stat '/d/mergedFile/SMALLFILE_st_zerobias_adc_24207054_raw_2400013.daq' [No such file or directory]"” (also happened this morning; Jeff is looking into it.)

“When the beam is dumped, take a pedAsPhys_tcd_only run with TOF, MTD, ETOF, 1M events and HV at standby, and mark the run as bad, per Geary's request via the star-ops list. If there are no ETOF EVB errors and no trigger deadtime, then ETOF can be included in the run when the beam is back again.”

Owl shift: Mostly smooth physics running

“The run was stopped due to unexpected beam abort and FST HV problem (error 2).”

ETOF check mentioned above was attempted; not enough time to complete before beam returned.

“itpc 9, RDO2 was masked out”

Other items:

Roof access scheduled for next Wednesday, with no beam, for AC servicing. Prashanth will ask an expert to come look at it before Wednesday (today?) to determine if a half-hour access (at end of this fill, ~ 11:00) is needed or not. [UPDATE: AC techs are going to do a roof access after the fill.] Reflective covers for windows in the assembly hall could also be used.
If it gets too hot, we might need to do an unscheduled stop.

Longer term: is there any computing that doesn’t need to be done there? Could maybe take some of L4 offline.


July 26, 2023

RHIC Plan:

Today: APEX “Plan A” = 7:00 - 23:00. Affected by power supply failure — decision by 12:00. Thursday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Mostly smooth physics runs

“Lost beam around 3:20 PM, and had a bunch of trips on TPC, FST, TOF.”

“The DAQ room temp. kept going up. Prashanth put a blower in the room, but the temperature needs to be monitored.”

Evening shift: No beam

“Only a cosmic run with the field on during the entire shift…A machine issue, namely the power supply failure, is still under investigations”

Owl shift: Cosmics

“The JEVP server seems to have a problem and is stuck at run 24207007” — “Jeff fixed the online plots viewer.”

Other items:

“Controlled access started around 8:40 AM. C-AD electricians went in to reset the fuses on a faulty AC.”


July 25, 2023

Notes from RHIC plan:

• Today: Physics run

• Wed: APEX

• Thu-Mon: Physics runs


Notable items/recap from past 24 hours: 

Day shift: Smooth physics runs before noon + 1 beam for sPHENIX BG test (2 hrs)

• Jeff: Updated production_AuAu_2023 and test_HiLumi_2023 configuration files:

production: increased UPC-JPsi & UPC-JPsi-mon from 50->100hz (nominal rates 100->200)

test_HiLumi: 1. set phnW/E to low rates; 2. removed BHT1-vpd100; 3. remove forward detectors from dimuon trigger; 4. set upc-main to rate of 100hz; 5. set upc-JPsi and UPC-JPsi-mon to ps=1

• Jim: PI-14 Methane alarm (Yellow); switched Methane 6 packs on the gas pad; added Alexei's magic crystals to TPC gas system which help enhance the Laser tracks

• Magnet down (2:00pm)

Evening shift: Smooth physics runs

Owl shift: Smooth physics runs

• EEMCHV GUI shows one red (chn 7TA) and two yellow (4S3, 3TD) channels.

 MAPMT FEE GUI is all blue in the small one, and all red in the detailed view.

 However, no apparent problem seen in the online monitoring plots

• EPD PP11 TILE 2345 had low ADC values. Rebooted EQ3, TRG and DAQ, and took trigger pedestals; the issue was fixed.

Other items:

• Outgoing PC: Zaochen Ye --> Incoming PC: Isaac Mooney

• Methane 6-packs for gas were ordered at the beginning of the run, but will discuss offline

• Water bottles are empty, get some from other trailer room


July 24, 2023

Notes from RHIC plan:

• Today: Physics run + single beam experiment (for sPHENIX BG test) around noon (~1 hour)


Notable items/recap from past 24 hours: 

Day shift: Smooth physics runs

• BTOW-HT plots have missing channels near trigger patch ~200. Oleg suggested rebooting the trigger; we rebooted but the problem persists. Hank called and suggested that we power cycle the BCE crate; we power cycled it but the problem persists.

• TOF gas: switched PT1 Freon from line B to line A

Evening shift: Smooth physics runs

• Jeff called in and helped us fix the L4Evp.

• It was not working because:

1. l4evp was not included in the run. It was not clearing from the "waiting" state because it had been disabled from the run, so when L4 was rebooted it was NOT rebooted. Putting it back in the run fixed this.

2. xinetd is used in the communication between the Jevp and the DAQ server. It was in an inconsistent state, so I restarted xinetd.

Owl shift: Physics runs with a few issues

• Beam dumped around 2:20am due to power dip issue

• Magnet went down, VME crates went down as well

• TPC cathode was unresponsive; power cycling the VME crate associated with the cathode (57) fixed the issue

• The LeCroy that goes to BBC/ZDC/upVPD went down. DOs restarted the LeCroy, and BBC and upVPD came back; the ZDC IOC was still not good. There were 2 screens running LeCroy; killing both and restarting the IOCs fixed the issue.

• Back to physics around 5am.

Other items:

• Gene: “Distortions from Abort Gap Cleaning on 2023-07-21”

• MB DAQ rate dropped from 41k to 37k (due to TPC deadtime), now back to 41k

• High-lumi test, next week?


July 23, 2023

Notes from RHIC plan

• Today-Monday: Physics run


Notable items/recap from past 24 hours: 

Day shift: Smooth physics runs

• Empty areas in the eTOF digidensity plot; Geary suggests a full eTOF LV/FEE power cycle + noise run during a 2-hour access.

Evening shift: 3 physics runs + a few issues

• MTD HV trip for BL4,5,6,7 before flattop. DO powercycled HV block 4-7 following the manual and fixed the issue

• Online QA plots were not updating; restarting the Jevp server from the terminal on the desktop near the window fixed it

• L4 had an error: l4Cal, l4Evp, L4Disp were not responding and prevented starting the run. Tried rebooting L4, but it did not work. Jeff Landgraf helped work on the issue. In the meantime, L4 was taken out and data taking restarted.

• Once l4Evp gets solved by Jeff, the issue will be fully resolved.

• BBQ from the L2 trigger had a problem: Most timed out nodes : BBQ (2000). The DO could not power cycle it because the GUI was not responding; Jeff power cycled it. The DO contacted expert David and he restarted the canbus to fix the GUI

Owl shift: Smooth physics runs when beam is on

• Beam lost twice (2:27-4:00am, 7:25-9:15am)

Other items:

• MB rate drop (from previous normal 4100 Hz to current 3700 Hz). Jeff should check on the prescale. Affected by the UPC trigger? Dead time from TPC?

• Oleg: need to replace a DSM board? Hank: no need to do it. Oleg and Hank will follow up offline.

• BG level at the beginning of a run is too high and triggered lots of trips/current spikes in different detectors (sTGC, MTD, TOF, eTOF…). Solution: wait for “physics” (not “flattop”) to bring up detectors.

• Geary: minimize eTOF effects on data taking for physics runs (rest eTOF for a while; Geary will talk to eTOF experts to get a solution). Temporary solution: leave eTOF out when it has an issue and wait for eTOF expert notice to include it in the run.


July 22, 2023

Notes from RHIC plan

• Today-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs

• Loss of EPD connection (but it did not affect EPD data taking). Later the connection came back.

• TOF gas is getting low; the gas change would be this Sunday. Shifts should pay special attention.

• DAQ room AC stopped working. Experts replaced the problematic unit.

Evening shift: Smooth physics runs

• Alexei came and worked with the TOF gas (isobutane)

Owl shift: Smooth physics runs

Other items:

• The shift leader slot for the July 25 day shift is filled


July 21, 2023

RHIC plan: 

Today-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs

Evening shift: Smooth physics runs

FST: HV alarm (Failure code 2). DO followed the power cycle procedure and fixed it.

mask evb01 out 

DAQ dead time was noticed 20 minutes later than it should have been; shifts need to pay more attention to it.

Owl shift: Smooth physics runs 

Other items:

eTOF operation should not cost any physics run time; Geary shared new instructions

RHIC will operate with continuous abort gap cleaning (maybe every hour); we should have a plan for data taking during this condition.

A shift leader is missing for the week of July 25

Bill can help a few days and Dan will get a solution today

Run log is not working well

More attention on the deadtime from DAQ


July 20, 2023

RHIC plan: 

Today-Monday: Physics run

 

Notable items/recap from past 24 hours:

Day shift: Maintenance

Jeff fixed the Run Control GUI issue by rebooting the X server

sTGC gas, re-adjust the pressure

Eleanor performed CosmicRhicClock test run 24200043

Evening shift: No beam due to (sPHENIX TPC laser work + power supply issue) 

Owl shift: Smooth physics runs from 3am 

Other items:

DAQ rate at high-lumi runs is ~2-3k Hz; we can reach 5k for the MB trigger. Gene wants special runs of a few minutes each (DAQ: 5-4-2-4-5 k), sometime next week.

eTOF operation should not cost any physics run time:

Remove it from the run if eTOF has an issue; try a pedestal test run after the beam is dumped and before the next fill. If eTOF runs well in the test run then it can be included in the next physics run; otherwise keep it out of the run.

 

July 19, 2023

RHIC plan: 

Today: Maintenance (7:00-17:00)

Thu-Mon: Physics run

 

Notable items/recap from past 24 hours:

Day shift: Smooth physics runs + Hi-Lumi Test runs (90m)

Slow response/refresh of the Run Control GUI; it can be improved by moving the GUI window, but is not completely solved.

Evening shift: Smooth Physics runs 

Owl shift: Smooth physics runs 

Maintenance:

hours are needed in the morning from 10:30 am; TPC water will be out (TPC FEEs should be off)

sTGC gas: re-adjust pressure, reducing valve

tour for summer students


July 18, 2023

RHIC plan: 

Today: Physics run

Wed: Maintenance (7:00-17:00)

Thu-Mon: Physics run

 

Notable items/recap from past 24 hours:

Day shift: Smooth physics runs before 11am

Wayne replaced a disk in EEMC-SC

MCR: power supply issue

Jeff: 1. Removed zdc_fast 2. Put zdc_fast rate into the UPC-mb trigger 3. Added contamination protection to UPC-mb 4. updated production ID for UPC-mb; 5. Added monitor trigger for zdc-tof0; 6. added test configurations: CosmicRhicClock & test_HighLumi_2023

Evening shift: Smooth Physics runs since 6:30 pm

Owl shift: Smooth physics runs 

Other items:

remind shifts about the eTOF instructions for this year's run

Plan for Wednesday's maintenance:

hours are needed in the morning from 10:30 am; TPC water will be out (TPC FEEs should be off)

sTGC gas: re-adjust pressure, reducing valve

tour for summer students

 

July 17, 2023

RHIC plan: 

Today: Physics run

 

Notable items/recap from past 24 hours:

Day shift: physics runs

“Error writing file st_X*.daq: No space left on device”. masked out EvB[5]

Evening shift: Physics runs

sTGC cables 4, 27, 28 were dead. DO power cycled LV and fixed the issue

eTOF 100% dead. DO powercycled eTOF LV

EVB[24] [0xF118] died/rebooted. After two times, masked EVB[24] out. (Once it happens, try rebooting it only once; if that does not work, mask it out directly.)

Owl shift: Smooth physics runs when beam was on

Magnet tripped at 3:40 am; CAS fixed it, back to normal running after 1 hour (reason for the magnet trip is still not clear)

Other items:

Plan for Wednesday's maintenance:

* hours are needed in the morning; TPC water will be out (TPC FEEs should be off)

* sTGC gas: re-adjust pressure, reducing valve

 

July 16, 2023

RHIC plan: 

Today-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: 3 physics runs, mostly no beam

Tonko: Reburned failing PROM in iS02-4; Brand new iTPC gain file installed. Should fix issues with S20, row 35; Added code to automatically powercycle TPX RDOs if required

Jeff: L0 software update to make prescale determination (and run log scaler rate logging) use the scaler rate adjusted with the proper contamination definition; Jeff will follow up on this issue.

Magnet tripped at 1:47 pm and stayed down till the end of this shift (reason for this trip is unclear; needs follow-up)

Evening shift: Physics run started at 7pm

BTOW ADC empty entry

eTOF 100% dead

TPX and iTPC both had high deadtime ~ 70%

Owl shift: Smooth physics run except beam dump (2:50-4:45am)

2:35 AM, sTGC gas pentane counter yellow alarm; Prashanth reset the counter in the sTGC gas system panel to fix it

MTD gas bottle changed from Line A to Line B (operators need to pay closer attention to the gas status)

Other items:

Geary added instruction of ETOF DAQ issue into the ETOF manual

 

July 15, 2023

RHIC plan: 

Today-Monday: Physics run

Now, CAD is working on AC issue, will call STAR when they are ready to deliver beam


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs

ZDC_MB_Fast was tested; needs further tuning

Evening shift: Smooth physics run

VME lost communication at 5 pm; David rebooted the main canbus

sTGC fan temperature was higher than threshold; expert fixed it

Owl shift: Smooth physics run till beam dump

Other items:

eTOF DAQ issue was solved by Norbert; it can join the run

 

July 14, 2023

RHIC plan: 

Today: Physics run

~ 1 hour CeC access around noon

Friday-Monday: Physics run

 

Notable items/recap from past 24 hours:

Day shift: no beam

Prashanth changed the sTGC gas.

Evening shift: Physics run

7pm, sTGC gas had an alarm. Expert came over to fix it.

iTPC and TPX high dead-time issue; problematic RDO iTPC 18(3); lost ~1 hour

Oleg came over and helped the DO to fix the BTOW

Owl shift: Smooth physics run, except 2 hours no beam

Other items:

zdc_mb_fast: Jeff will monitor and keep tuning

eTOF kept out of the run because it causes a high trigger rate

Leak in the control room from the AC, close to eTOF, but no harm at this moment; people are working on it.


July 13, 2023

RHIC plan: 

Today: 2 hours control access, may have beam early afternoon

Friday-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: APEX

1 EPD ADC channel had been missing since the night shift; the EPD expert was called, and it was solved by power cycling EQ1 and taking a rhicclock_clean run. The shift crew should watch the online plots more carefully and compare to the reference plots more frequently.

Evening shift: APEX

Jeff added an inverse prescale for ZDC_MB_FAST (not tested; if the shift crew sees problems, e.g. deadtime ~100%, please inform Jeff. Aim for taking data at 4k at the very beginning of the fill, to try to get a uniform DAQ rate. Jeff will also watch it)
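The prescale idea behind this can be sketched generically: a prescale records roughly 1 out of every ps triggers, so picking ps from the instantaneous input rate keeps the recorded rate near a target even as luminosity falls off during a fill. This is only an illustration of the concept, not STAR's actual L0 code; the function and parameter names are ours:

```python
import math

def choose_prescale(input_rate_hz, target_rate_hz):
    """Pick the smallest integer prescale ps >= 1 such that
    input_rate / ps does not exceed the target recorded rate."""
    if input_rate_hz <= target_rate_hz:
        return 1  # ps=1 means every trigger is taken
    return math.ceil(input_rate_hz / target_rate_hz)

# Early in a fill: 20 kHz of raw triggers, 4 kHz target -> ps = 5
print(choose_prescale(20_000, 4_000))  # 5
# Late in a fill the raw rate drops and the prescale relaxes
print(choose_prescale(3_000, 4_000))  # 1
```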

Owl shift: Cosmics

Ingo fixed eTOF DAQ issue


12 July 2023 

RHIC plan:

Today: APEX starting 7:30 am (~16 hours)

Thu - Mon: Physics run

sPHENIX requested no beam for Laser test(5 hours) either on Thu or Fri


Notable items/recap from past 24 hours:

Day shift: not much good beam; pedestal runs, 3 good runs

Evening shift: TRG issue, Beam dump due to power failure, pedestal runs 

TRG experts power-cycled triggers and nodes; got the TRG back after 3 hours of work

OWL shift: Smooth Physics runs 2:20-6:45 am


3 July 2023


RHIC/STAR Schedule

Running AuAu until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

Students from Texas are visiting STAR. It would be good to arrange a STAR tour for them.

Tally: 3.43 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Yesterday morning beam loss after about 20 minutes at flattop. Some FST HV tripped.

· Beam back at flattop around 10:50 but PHYSICS ON declared half an hour after that.

· Smooth datataking after that with a TPC caveat (see below)

· This morning a beam loss that will take a few hours to bring back.

· 107x107 bunches last couple of days to address the yellow beam problems.

Trigger/DAQ

TPC/iTPC

· Tonko worked on iTPC RDOs. Most have been unmasked.

· At some point the problems with a 100% deadtime started. Restarting run and/or FEEs did not always solve the problem. Tonko was working with the shift crew.

· Three RDOs are down (iTPC). Two may come back after the access.

BEMC

· Two red strips around phi bin 1.2 in run 24184004, normal otherwise

EPD

· West tiles did not show up in one run, but were back again in the next one.

FST

· On-call expert change


Hanseul will take over as Period Coordinator starting tomorrow.


2 July 2023

RHIC/STAR Schedule [calendar]

Running AuAu until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

Air quality has substantially improved today, but this depends very much on the winds and may worsen again.

Tally: 3.23 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Beam loss around 17:45, TPC anodes tripped.

· Ran some cosmics until we got beam back around 22:00

· Smooth running after.

· EPD and sTGC computers were moved away from the dripping area.


EPD

 West tiles did not show up in one run, but were back again in the next one.

eTOF

· EVB errors once. Was in and out of runs. Some new empty areas reported.

· eTOF board 3:16 current is 3 A (normally ~2 A). The shift crew says there was no alarm. The incident was reported to Geary.

 

1 July 2023

RHIC/STAR Schedule [calendar]

Running AuAu until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

AIR QUALITY!!!

AQI is not great, but nowhere near the HSSD trip levels. The document is growing, but more input is needed if it is to become a procedure.

https://docs.google.com/document/d/1-NhZJmS9MjIotvHUd9bPRVwObS-Uo7pWdjML36DjgeI/edit?usp=sharing

Tally: 3.02 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· sPHENIX requested access yesterday morning.

· Tim swapped the troubled BE005 DSM board out for a spare. It was tested, and Oleg ran the bemc-HT configuration and verified that the problem BTOW was having is fixed.

· Beam back (after the access) around 13:40.

· Beam loss around 20:40 causing anode trips

· Problems with injection. Beam back around half past midnight.

· Very smooth running after that.


Trigger/DAQ

· Jeff made the agreed modifications to the zdc_fast trigger and added it back

· Also put DAQ5k mods into the cosmic trigger and improved scaler rate warning color thresholds

TOF/MTD

· Gas switched from A to B.

eTOF

· A new module is missing.

 

30 June 2023

RHIC/STAR Schedule [calendar]

F: STAR/sPHENIX running

 sPHENIX requested a 2-hour RA from 9 to 11.

Running until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

AIR QUALITY!!!

AQI is not great, but nowhere near the HSSD trip levels. The document is growing, but more input is needed if it is to become a procedure.

https://docs.google.com/document/d/1-NhZJmS9MjIotvHUd9bPRVwObS-Uo7pWdjML36DjgeI/edit?usp=sharing

Tally: 2.86 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Around 12:50 one beam was dumped for the sPHENIX background studies

· 12x12-bunch beam around 16:40. This was to test the blue beam background. MCR was step by step (stepping in radius) kicking the Au78 away from the beam pipe. This resulted in a much cleaner beam, with yellow and blue showing the same rates. They are now confident about the cause of the background, but creating a lattice to address it is a challenge.

· New beam around 2:20


Trigger/DAQ

· BHT3 high rates happened overnight

· Geary was able to remove the stuck TOF trigger bit.

· Tonko suggested leveling at 20 kHz, based on last night's beam and rates/deadtime.

TOF/MTD

· Lost connection to the TOF and eTOF HV GUIs. David suggested that it could be a power supply connection problem. The connection restored itself.

sTGC

· Frequent sTGC PT2(2) pressure alarms in the evening. The SL suggested changing the pressure threshold from 16 psi to 15.5 psi; it is not known whether this was done. David will have a look and decide whether to lower the alarm threshold or to increase the pressure a little.

Discussion

· For the moment, keep the leveling at 13 kHz and discuss the adjustment of triggers during the next trigger board meeting.

· Tim will replace the DSM1 board and Jack will test it.

· During the next maintenance day the magnet will be brought down to fix the leak in the heat exchanger that occurred after the last maintenance.


29 June 2023

RHIC/STAR Schedule

Th: STAR/sPHENIX running

F: STAR/sPHENIX running

AIR QUALITY!!!

We were warned about the air quality index reaching 200 today, which means the HSSDs will go crazy; the fire department would therefore like them off, which means turning the STAR detector off, as we did a couple of weeks ago.

Experts please be ready and please contribute to this document so we have a written procedure in case this happens again.

https://docs.google.com/document/d/1-NhZJmS9MjIotvHUd9bPRVwObS-Uo7pWdjML36DjgeI/edit?usp=sharing

Tally: 2.65 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Beam back around 22:10

· Pretty smooth running except stuck TOF bit starting around 2:00. Geary is working on it.


Trigger/DAQ

· Jeff added tcucheck into the logs, so that does not need to be done manually anymore.

TPC/iTPC

· TPC anode trip in sector 11.

· Tonko worked on the problematic RDOs on the outer sectors that were masked in recent days. It seems that some FEEs have problems with DAQ5k; he masked them and the RDOs are back in runs.

· Plan for inner RDOs is to take a look today or at the next opportune moment.

eTOF

· One more empty fee

Discussion

· The shift crew power-cycled the MIX crate to try to fix the stuck TOF bit, but it did not seem to help.

· If the board for the TOF stuck bit problem needs to be replaced we will need an access.

· 8 o’clock run seems to have proper rate.


06/28/2023

RHIC/STAR Schedule

W: APEX 16 hours

 It will most probably be over around 19:00.

Th: STAR/sPHENIX running

F: STAR/sPHENIX running

Tally: 2.53 B ZDC minbias events.


Summary

· Continue AuAu200 datataking. 45-minute runs. Detectors ON at FLATTOP.

· Beam was extended way beyond its dump time due to the problems with injectors. Dumped around 19:00

· sPHENIX requested a short controlled access (30 min), after which beam was back around 20:50.

· The first run was taken with no leveling, for tests; after this we are running with leveling at 13 kHz.

· There is water dripping in the control room over the sTGC station.


Trigger/DAQ

· Tonko changed DAQ_FCS_n_sigma_hcal threshold from 2 to 5.

TPC/iTPC

· TPC anode sector 13 channel 7 tripped three times.

BEMC

· Overnight high rates of BHT3 and BHT3-L2Gamma.

· Oleg was contacted. A trigger reboot (if a run restart does not help) seems to fix it.

· Oleg: the DSM boards need to be replaced; otherwise we see them picking up masked trigger pages.

EPD

eTOF

· Geary worked on eTOF and it was included in the runs. It worked without major problems.

· Lost a couple of fees and then the entire module was gone.


06/27/2023

RHIC/STAR Schedule [calendar]

T: STAR/sPHENIX running

sPHENIX wants to run some beam steering tests, so the beam will be dumped 2 hours earlier

W: APEX 16 hours

Th: STAR/sPHENIX running

F: STAR/sPHENIX running

Tally: 2.28 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• Beam dumped around 12:45, and we went into a controlled access requested by sPHENIX.

• Beam back around 19:00 but lost and then back in about 45 minutes.

• A/C in the control room is fixed.

• We asked MCR to level at 13 kHz zdc rate to take advantage of the DAQ5k. With the new beam we got 4.2 kHz DAQ rate, TPC deadtime around 40%.

• This morning we requested MCR to remove leveling. Without leveling, DAQ rates are ~4.2 kHz, with zdc_mb dead times around 51-56%.
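As a rough illustration (plain arithmetic, not STAR software), the recorded DAQ rate, the deadtime fraction, and the offered trigger rate are related by recorded = offered × (1 − deadtime); the function name and numbers below are for illustration only.

```python
# Illustrative deadtime arithmetic, assuming recorded = offered * (1 - deadtime).
# Hypothetical helper, not part of any STAR tool.

def offered_rate_khz(recorded_khz: float, deadtime_frac: float) -> float:
    """Offered (input) trigger rate implied by a recorded rate and a deadtime."""
    return recorded_khz / (1.0 - deadtime_frac)

# A recorded ~4.2 kHz at ~51% deadtime implies roughly 8.6 kHz offered.
print(round(offered_rate_khz(4.2, 0.51), 1))
```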

• Around 23:00 DAQ monitoring page had some problems but was restored to normal in an hour or so. Perhaps it is related to a single corrupt message which the DAQ monitoring cannot display. It will restore itself.

• There was also an intermittent problem loading the shiftLog page in the evening. 

• Vertex looks well under control.

Trigger

• Jeff made a bunch of changes to the trigger setup, as agreed at the trigger board meeting. Some low-rate triggers were implemented (~2 Hz and ~50 Hz).
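For context, one common way to hold a trigger at a low rate is a counter prescale, which accepts every N-th firing so that rate_out ≈ rate_in / N. The sketch below is a generic illustration of that mechanism, not the actual STAR trigger code.

```python
# Sketch of a counter prescale: accept only every N-th firing of a trigger.
# The function is hypothetical and purely illustrative.

def accept(fire_index: int, prescale: int) -> bool:
    """Accept the trigger on every `prescale`-th firing (0-based count)."""
    return fire_index % prescale == 0

# 1000 firings with prescale 100 -> 10 accepted (rate reduced by 100x).
accepted = sum(accept(i, 100) for i in range(1000))
print(accepted)
```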

TPC/iTPC

• Alexei checked the laser system during the access.

• Couple of additional RDOs could not be recovered and were masked out.

• Tonko will look at the masked RDO status tomorrow during the APEX.

BEMC

• Oleg has masked out Crate 0x0F.

• Tonko suppressed BTOW CAUTION message for Crate 4, Board 4.

• The high BHT3 trigger rate showed up but was resolved by restarting the run.

eTOF

• Geary worked on eTOF. It was briefly included in the runs, but the problems persisted. So, it is out again.


In progress / to do

• Increasing run duration.

o Currently we are running 30-minute runs.

o Perhaps we can increase the run duration to 45 minutes?

o AGREED: switch to 45 minute long runs.

• Bringing detectors up at flattop.

o Currently detectors are brought up after PHYSICS ON is declared.

o If experts agree that the beams at FLATTOP are stable enough to bring up detectors, we could opt for this.

o AGREED: to bring up detectors at FLATTOP.


Discussion

• Tonko mentioned that sometimes FCS04 starts to record data at a very high rate, causing deadtime. Perhaps a tighter ADC (nSigma) cut should be applied to remove the noise, which it most likely is at those high data rates.

 

06/26/2023

RHIC/STAR Schedule

T: STAR/sPHENIX commissioning

sPHENIX will need 4 hour access today. Time TBD around 10:30.

Tally: 2.12 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• Fills around 10:00, 18:00, and 4:40 this morning.

• Many eTOF EVB errors. Much more than usual.

• Many BHT3 high trigger rate issues.

• The temperature in the control room was in the low 80s and could not be adjusted using the thermostat. The fan blows constantly because the thermostat is set low, but the air it blows is not cold.

• MCR is periodically correcting the vertex position.

• They are monitoring it and will trigger a correction at 10 cm. They also said they are working on an automated vertex-correction procedure.


TPC/iTPC

• Tonko updated sectors 1-12 (both inner and outer) to DAQ5k.

• TPX RDOs S11-5 and S08-6 masked as Tonko sees some problem with them.

• ITPC: RDO S24:1 masked later (FEE PROM problem)

• iTPC RDO S18:3 masked early this morning

• Gas alarm briefly chirped twice this morning.

• This morning Tonko finished updating the entire TPC to DAQ5k

• 24177033 first run with DAQ5k configuration

BEMC

• A lot of BHT3 high rate trigger issues

• Oleg masked out BTOW TP 192, 193 and 159 from trigger.

• Issue with high rate of triggers still persisted.

• Oleg: some crates lose configuration mid-run. Symptoms similar to radiation damage, which is strange with the AuAu beam.

• Power cycling the BTOW power supply should not be done so often.

• Oleg will mask the problematic boards to eliminate the problem.

eTOF

• Many EVB errors. eTOF was mostly out of runs overnight and this morning.

• After many attempts to fix and bring back to runs it was decided to keep it out.


Discussion

• J.H will let CAD know that we would like to level ZDC rate at 13 kHz to accommodate DAQ5k rates.

 

06/25/2023

RHIC/STAR Schedule [calendar]

Su: STAR/sPHENIX commissioning

Tally: 2.01 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• Shift leaders were in contact with MCR to have z vertex steered back to center

• Smooth running otherwise.

• MCR was checking on their injectors this morning.


Trigger

• Jeff moved triggers to the recovered bits UPC-JPSI-NS slot 9->15, UPC-MB slot 14->31, fcsJPSI slot 12->34

TPC/iTPC

• jevp plots updated and show the missing RDO data in sectors 4, 5

• PT1 and PT2 alarm threshold lowered to 15.5 PSI, alarms sounded when they dropped below 16 PSI.

• With the new fill around 18:00 the shift crew noticed higher deadtime and lower rates (1.8 kHz). Tonko was able to fix the problem by power-cycling the TPX Sector 8 FEEs, which seem to have been causing this issue.

• Tonko continued working on updating sectors.

• The drift-velocity-dependent TPC parameters used by the HLT were just changed. This should properly account for the changing drift velocity when reconstructing the z vertex.
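The dependence is simple to sketch: a TPC hit's z coordinate is reconstructed from its drift time, so a stale drift velocity shifts every hit, and with it the reconstructed vertex. A minimal illustration follows; the drift length and the ~5.45 cm/µs velocity are rough illustrative values, and the function is hypothetical, not HLT code.

```python
# Minimal sketch: z of a TPC hit from its drift time and the drift velocity.
# DRIFT_LENGTH_CM and the velocity values are illustrative, not calibrated numbers.

DRIFT_LENGTH_CM = 210.0  # approximate half-length of the TPC drift volume

def hit_z_cm(drift_time_us: float, v_drift_cm_per_us: float) -> float:
    """z measured from the central membrane, for a hit drifting toward the endcap."""
    return DRIFT_LENGTH_CM - v_drift_cm_per_us * drift_time_us

# A 1% error in v_drift moves a hit near the membrane by about 2 cm.
z_true = hit_z_cm(38.5, 5.45)
z_off = hit_z_cm(38.5, 5.45 * 1.01)
```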

BEMC

• The issue of the BHT3 trigger firing at a very high rate reappeared. Oleg was contacted and suggested power cycling BEMC PS 12 ST when a simple run restart does not help.

FST

• Settings/configuration reverted back to the pre-time-bin-9-diagnosis setup.


Discussion 

In case of a dew point alarm, contact Prashanth.


06/24/2023

RHIC/STAR Schedule

Sat: STAR/sPHENIX commissioning

Su: STAR/sPHENIX commissioning

Tally: 1.89 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• MCR Computer at the SL desk pops a message about needing to update something.

• We had about 2 hours with just one beam circulating as requested by the sPHENIX

• Z vertex is drifting away during the fill

• Unexpected beam dump around 1am. TPC anodes tripped.

• Took cosmic data until beam returned around 6:40 this morning.

• LV1 crate lost communication which caused FCS and sTGC alarms. Back after quick recovery.

• Smooth running since.


Trigger

• Jeff worked on trigger configuration

• Set pre/post = 1 for the fcsJPsi, UPC-mb, and UPC-Jpsi-NS triggers (bits 9, 12, 14), in order to debug an issue with lastdsm data not matching trigger requirements.

• Jeff also changed the scalers that we send to CAD; they had been zdc-mb-fst and are now changed back to zdc-mb.

• This morning Jeff moved these bits again to the slots that were previously considered “bad” and proved to be usable.

TPC/iTPC

• Methane gas has been delivered.

• Tonko checked problematic RDOs in iTPC sectors 3, 4, 5. The problem is now fixed and needs the jevp code to pick up the changes and be recompiled.

• Drift velocity continues to go down but shows signs of plateauing.

TOF/MTD

• TOF gas bottle switched from B to A - 14:20

• TOF LV needed to be power cycled

FST

• A progress update was distributed by email; experts will discuss it to reach a conclusion.

• The inclination seems to be to switch the time bin back.

• The switch will happen at the end of the current fill.


06/23/2023

RHIC/STAR Schedule

F: STAR/sPHENIX commissioning

Sat: STAR/sPHENIX commissioning

Su: STAR/sPHENIX commissioning

Tally: 1.79 B ZDC minbias events.


Summary

· From the 9 o’clock coordination meeting

o CAD has a plan to go back to the blue background issue and try to eliminate it.

o They will also work on tuning the beam to get our vertex centered.

o sPHENIX requested hour-long tests with a single-beam configuration (one hour for each beam). At the end of the fill one beam will be dumped, and the other at the end of the next fill.

· Yesterday beam back around 13:15 after a short access that we requested.

· sPHENIX requested a short access around 17:00

· Beam back around 18:30 but without sPHENIX crossing angle. It was put in around 19:30 and that seemingly improved our background

· Smooth running after that.

· This morning PSE&G did some work. There was just a split second light flicker in the control room, but nothing else was affected.

Trigger

· Jeff updated the MTD-VPD-TACdiff window: MTD-VPD-TACDIF_min 1024->1026. The TACDIF_max stays the same at 1089.

TPC/iTPC

· About 11 days of methane gas supply is available.

· Expectation to deliver 2 six-packs today.

· Drift velocity continues to decline

BEMC

· Oleg took new pedestals for the BEMC and the noise problem has vanished. We must have had bad pedestals.

EPD

· Tim used access time to check on EPD problem.

· The East TUFF box CAT5 cable was disconnected. After reconnecting it, everything seems back to normal.

FST

· Gene: FST crashes the reconstruction chain, so it is out until fixed

Discussion

 Jeff: added monitoring to trigger bits and noticed that some triggers are not behaving as expected. There are some slots marked “bad” that could be used for the newly noticed “corrupted” triggers, after checking whether they are actually bad.


06/22/2023

RHIC/STAR Schedule

Th: STAR/sPHENIX commissioning

12 x 12 bump test @ 8:00

F: STAR/sPHENIX commissioning

About 1.69 B ZDC minbias events collected.


Summary

• Magnet was down for the cooling maintenance (heat exchange cleaning)

• Maintenance team was not able to wrap up early, so we kept magnet down overnight.

• Took zero field cosmics during the RHIC maintenance day.

• Beam back around 1:00 am with 56 x 56 bunches.

• We took data with production_AuAu_ZeroField_2023 configuration.

• Gene reported the DEV environment on the online machines to be back to normal operations. Problems are reported to be gone.


Trigger

• Tonko corrected the deadtime setting. Now it is set to the requested 720. This fixed the FST problems seen in the beginning of this fill.

TPC/iTPC

• About 12 days of methane gas supply is available. Suppliers are being pressed to deliver more ASAP.

• Tonko worked on moving more sectors to DAQ5k configuration. Came across problems with sector 6.

• iTPC iS06-1 masked

• Some empty areas in sectors 4,5,6

• Tonko will look once the beam is back. The clusters seem to be there but are not seen on the plots (sectors 4 and 5).

BEMC

Oleg asked to power cycle crate 60 to address noise issues in BEMC. It did not help. Access is needed to attempt to fix this issue. The problem seems to have started on Saturday. Only a few minutes of access to the platform are needed.

It was suggested to power cycle DSM as an initial measure to see if it helps, but this problem might also be coupled with the EPD problem we are seeing.

EPD

• EPD ADC east is empty; EPD ADC west has a limited number of entries.

• Experts are looking into this problem. It may be due to a problem in the QA plot making.

• Some sections were also reported to have problems.

• Might be the problem with the FEE.

• To check this issue access will be needed as well – up to an hour.

FST

• FST experts made changes for the time-bin diagnostics.

• It was having problems in the beginning of the fill but was settled after Tonko corrected the deadtime settings.

• Experts are looking at the data after the change.

• The time-bin distribution might indicate the presence of an out-of-time trigger. Jeff will also investigate this.


06/21/2023

RHIC/STAR Schedule

W: maintenance day: 7:00 – 20:00

sPHENIX TPC commissioning 5 hours after maintenance – no beam

Th: STAR/sPHENIX commissioning

12 x 12 bump test @ 8:00

F: STAR/sPHENIX commissioning

 

Summary

• AuAu 200 GeV continues.

• Around 11:00 sPHENIX asked for a one hour access. Took a few cosmic runs.

• Beam back around 12:45 with 50 x 50 bunches

• 111 x 111 bunch beam around 19:45, although the MCR monitor showed 110 x 111

• About 1.69 B ZDC minbias events collected.

• Dumped this morning around 6:30. Prepared for the magnet ramp and brought the magnet down (and disabled). Around 7:00 David Chan confirmed that the magnet was down and said that work on the heat-exchanger cleaning will start; we will be kept updated throughout the day.

• Depending on how it goes, we may or may not keep the magnet down overnight.

Trigger

Jeff made some changes to the production trigger and L0 code

DAQ

• The BHT3 trigger high-rate issue that causes deadtime reappeared yesterday. A run restart did not help, and neither did any of the other superstitious attempts. Coincidentally, the beam was dumped and refilled around that time. Once we came back with a new beam the problem was gone.

• Oleg: looked and saw no error messages when this is happening. If it happens again, the suggestion is to power cycle the LV of this crate [4 crates are affected by the power cycle].

TPC/iTPC

• Needed some attention from time to time (power cycling FEEs).

• Multiple peaks in drift velocity in a couple of laser runs (not all)

• Drift velocity keeps falling after the gas change

• Tonko will update about 6 sectors probably once beam is back 

TOF/MTD

EEMC

• Brian noted that EEMC tube base 7TA5 seems dead and can be masked

eTOF

• DAQ restarted and kept out for one run because of additional empty strip (13) noticed by the shift crew.

FST

• Time-bin diagnostics plan? Doing the time-bin-change diagnosis in parallel with the offline analysis might be prudent.

• Ziyue will distribute the summary of the plan for this 9 time bin diagnosis.

• Jeff: changes have to be made in the trigger setup, associated with the FST time-bin change, for us to run properly.

 

Discussion 

• Zhangbu: MCR was using the ZDC rate without the killer bit for their beam tuning. It seems they are now using the right rate (with the killer bit). We might need to redo the vernier scan.

• Maria: the EPD QA monitoring plots have been lost since day 166. Akio had the same problem. Gene has been working on the DEV environment on the online machines. There is some improvement, but automatic running of jobs is failing.

 

06/20/2023

RHIC/STAR Schedule

T: STAR/sPHENIX

W: Maintenance day: 7:00 – 20:00

 sPHENIX TPC commissioning 5 hours after maintenance – no beam

Th: STAR/sPHENIX commissioning

 12 x 12 bump test @ 8:00

F: STAR/sPHENIX commissioning


Summary [last 24 hrs]

· AuAu 200 GeV continues.

· Over 1.56 B ZDC minbias events collected thus far.

· Beam extended past the scheduled dump time due to the issues at CAD. Unexpected beam dump around 2:20 this morning. Back around 6:50 and a quick loss. Back for physics around 7:30 again. Running since.


DAQ

· Yesterday afternoon: TPC showing 100% deadtime. Power cycling TPC FEEs did not help. Many things were tried, but it was fixed only after PefAsPhys, although the culprit was not clear to the crew. The problem was caused by BHT3 firing at a very high rate. If this happens, restarting the run should fix the issue; if not, a call to Oleg should help.


TPC/iTPC

· Tonko: TPX sectors 3 and 4 updated – an ongoing process. Waiting for Jeff to discuss a couple of ideas about token issues in the iTPC. Two iTPC sectors updated so far.


FST

· From the discussion at the FST meeting: test running with the 9-time-bin setting for diagnostics, to test the timing shift. This will slow down datataking.

· Experts will discuss it further to come up with the action plan for this test.

· Tonko: the plan is to split off the forward triggers in DAQ5k. After that, a slow FST will only affect forward triggers and thus be less of a problem. Perhaps it is a good idea to wait for that to happen before these tests.


Discussion

· Alexei: changed the gas. The old one was affecting the drift velocity because of a contamination. This change should stabilize the drift velocity. It has already started to drop.

 

06/19/2023

(Weather: 59-76F, humidity: 74%, air quality 22)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· 56x56 nominal store yesterday.

· 111x111 store since last night 10:30pm.


§ STAR status

· Full field: zdc_mb = 1.45B, 280 hours of running.

· DAQ5k tested two sectors, ran at 5.2 kHz with 37% deadtime. See star-ops email from Tonko for details. Tonko: we should produce the FastOffline for this run, 24170017, to analyze the output.

Gene: /star/data09/reco/production_AuAu_2023/ReversedFullField/dev/2023/170/24170017


§ Plans

· Continue to take data thru the long weekend.

· Tonko, slowly ramp up the DAQ5k next week, 1hour/day ~ each day.

· FastOffline production for DAQ5k test runs.

· Reminder:

1) Trigger-board meeting tomorrow at 11:30am, see Akio’s email. To discuss trigger bandwidth.

2) RHIC scheduling meeting at 9:30am (was 3pm Monday).

3) Irakli will be Period Coordinator starting tomorrow, running 10am meeting. I will be giving the STAR update for the Time meeting at 1:30pm.


06/18/2023

(Weather: 59-78F, humidity: 66%, air quality 72)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· 56x56 nominal store until Tuesday.


§ STAR status

· Full field: zdc_mb = 1.29B, 259 hours of running (+120M events since yesterday 2pm)

· Half field: zdc_mb = 247M, 38 hours of running.

· 500A field: zdc_mb = 68M, 11 hours of running

· Zero field: zdc_mb = 168M, 30 hours of running

· Smooth running and data taking since 2pm yesterday. Magnet, PS, cooling, all worked.

· Carl: lowered TOFmult5 threshold from 100 to 20 for the FCS monitoring trigger.

· GMT gas bottle switched. Shift crew should silence the alarm for the empty bottle.
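As a quick cross-check of the tallies above (plain arithmetic, not any STAR tool), the implied average zdc_mb event rate is simply events divided by running time:

```python
# Average event rate implied by a tally: events / (hours * 3600 s).
# Purely illustrative arithmetic on the numbers quoted above.

def avg_rate_hz(events: float, hours: float) -> float:
    """Average rate in Hz from a total event count and hours of running."""
    return events / (hours * 3600.0)

# 1.29B full-field events over 259 hours -> roughly 1.4 kHz on average.
print(round(avg_rate_hz(1.29e9, 259)))
```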


§ Plans

· Continue to take data thru the long weekend.


06/17/2023

(Weather: 59-76F, humidity: 86%, air quality 29)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· 56x56 nominal store until Tuesday.


§ STAR status

· Full field: zdc_mb = 1.17B, 241 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· 500A field: zdc_mb = 68M, 11 hours of running

· Zero field: zdc_mb = 168M, 30 hours of running

· STAR magnet is down, and we are doing PS cooling system work (heat exchanger cleaning)

A lot of junk had accumulated on the tower side, while the PS side is clean, as expected.

· Blue beam background seems to be only a factor of 5 higher than yellow.

· Shift overlap issue: the evening-shift DO trainee is also the owl-shift DO. My proposal is to dismiss him early so he can be prepared for the owl shift. Carl: ask him not to come in for the evening shift.

· David: MCW temperature changed from 67F to 65F. David proposes to put it to 63F, given the dew point ~ 51-54F. Prashanth will set it to 63F.


06/16/2023

(Weather: 58-79F, humidity: 61%, air quality 28)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· Today will be 6x6 from now to ~1pm, and 12x12 in the afternoon.

· 111x111 nominal store starting this evening until Tuesday.


§ STAR status

· Full field: zdc_mb = 1.08B, 226 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· 500A field: zdc_mb = 68M, 11 hours of running

· Zero field: zdc_mb = 160M, 28 hours of running

· STAR magnet is at full field!

· TOF: pressure alarm from Freon, shift crew missed it.

· Tonko: DAQ5K, some tests were interrupted due to the magnet ramping.

· Blue beam background: now it seems the mystery is understood but not yet confirmed:

- Au78 is the source of the background. CAD did some calculations (can remain in RHIC for ~ 3 turns?, big spikes on Q3 magnet)

- 2016 didn’t have it because we had the “pre fire protection bump”.

JH: CAD will come up with a new lattice or plan to remove the background.

 

§ Plans

· Ready to take data!!!

· Tonko will finish the tests that were left unfinished.

· David: VME crates temperature sensor, what should we do with the alarm?

· FST: no more adjustment until next Tuesday.

· Lijuan: talked with David Chan, preparation work, e.g., chiller, heat exchanger, cooling system, etc. should be done during the shutdown and well in advance before the run.

Communication with the support group should go through one person, e.g., Prashanth, instead of through multiple people, to avoid potential miscommunication.


06/15/2023

(Weather: 58-77F, humidity: 67%, air quality 29)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· Thursday: PSEGLI work at Booster cancelled. Moved to next Wednesday.

12x12 bunches 6:00-13:00, no beam 13:00-18:00.

· Physics for the rest of the week.


§ STAR status

· Full field: zdc_mb = 1.08B, 226 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· Zero field: zdc_mb = 159M, 28 hours of running

· STAR magnet tripped due to the water supply issue. A few SCR fuses blown. CAS is still working on it. The current estimate is it can be back online this afternoon.

· Tonko: DAQ5K will be tested with real data, zero or half field.


§ Plans

· Magnet will be ramped up from half to full field in small steps.

· FST: APB timing, experts will look into it.

· FST running with DAQ5K. Jeff provided possible trigger setups for PWG to choose from, Carl made some suggestions. Jeff provided codes to Gene for the FastOffline production.


06/14/2023

(Weather: 60-74F, humidity: 77%, air quality 28)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· Wednesday APEX. (07:00-17:00) Overnight Physics.

· Thursday: PSEGLI work at Booster for 12-16 hours. Only one store during the day, if STAR has magnet.

12X12 bunches for morning, no beam for the afternoon.

· Physics for the rest of the week.


§ STAR status

· Full field: zdc_mb = 1.08B, 226 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· Zero field: zdc_mb = 124M, 21 hours of running

· STAR chiller is still being fixed. See Prashanth’s photos.

· David rebooted the main Canbus, the VME crate issues resolved.

· Tonko did some DAQ tests during the morning shift, including Elke’s request for sTGC. See shift log for details.

· Tonko: Data format is different for the DAQ5k, and online-found clusters are there but not the ADC plot.

· Shift crew reported that the online QA plot doesn’t have many entries for laser runs, where the events did not get “abort”. A JEVP plot issue? Alexei: need to train the DOs to tune the lasers better.

· Zhen Wang had some issues recovering daq files from HPSS, should contact star-ops (expert: Jeff). Ziyue had similar issues (FST).

· Shift: one DO trainee came to shift all day without having taken the RHIC Collider Training.

This is not acceptable, and each institute council representative needs to be responsible!

One possible solution: the Period Coordinator checks all shift crew members’ training status online each week, e.g., on Friday.


§ Plans

· Shift: Email reminder to the entire Collaboration. Bill: talk to CAD about training/schedule.

· Elke: some updates are needed on sTGC. Elke will send it to star-ops.[1]

· DAQ5k hope to be working before next week…

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.

· FST: APB timing, experts will look into it.

· FST running with DAQ5K. How to make the trigger? FST limit is at 3k. (prescale for the time being). Also follow up with PAC, PWGC, and trigger board.

Jeff will provide possible trigger setup for PWG to choose from.

 

[1] Summary from today’s sTGC meeting.

Tonko uploaded the correct software to the one RDO that was replaced before the run; this definitely improves the time-bin plot on page 144 of the online plots.

Based on the recent runs we will keep the time window at -200 to 600, so we do not cut into the distribution; also, if the luminosity goes up we will need it.

The multiplicity plot has not improved yet, first because the online plots have a cut on it, so can we please remove the time-window cut on the multiplicity plot (page 142).

But of course one still needs to check the multiplicity plots per trigger, to explain the shape offline.

Additional observations: page 139, plane 4, quadrant C, VMM 10 to 12 are hot; this is most likely FOB 87, which looks strange on page 148.

Should we disable it or live with it, or can we wiggle a cable during an access?
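The time-window cut under discussion can be pictured with a toy snippet (this is not the JEVP code; the hit representation and function names are made up): hits outside the window are dropped before the multiplicity histogram is filled, which is why removing the cut on page 142 would change that plot.

```python
# Toy model of a time-window cut applied before filling a multiplicity plot.
# The window matches the -200..600 setting mentioned above; everything else
# (hit format, function names) is hypothetical.

TIME_WINDOW = (-200, 600)

def multiplicity(hit_times, apply_time_cut=True):
    """Number of hits in an event, optionally keeping only in-time hits."""
    if apply_time_cut:
        lo, hi = TIME_WINDOW
        hit_times = [t for t in hit_times if lo <= t <= hi]
    return len(hit_times)

# With the cut, the out-of-time hit at t=700 is dropped.
print(multiplicity([-100, 0, 700]))                        # 2
print(multiplicity([-100, 0, 700], apply_time_cut=False))  # 3
```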


06/13/2023

(Weather: 63-77F, humidity: 74%, air quality 28)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Currently 111x111 bunches, started the store from yesterday.

12x12 bunches after this store for sPHENIX.

Physics this week,

· Tuesday: 100 Hz leveling at sPHENIX. ~ No leveling at STAR.

· Wednesday APEX.

· Physics for the rest of the week.


§ STAR status

· Full field: zdc_mb = 1.08B.

· Half field: zdc_mb = 235M, 34 hours of running.

· Shift changeover went smoothly.

· STAR chiller is being installed now.

· VME crate 77: Tim went in yesterday during the access and checked the voltage on those crates; they were fine. Are the issues in Slow Controls or monitoring?

David: Reboot the main Canbus.

· Tonko did some DAQ tests.

· FST running with DAQ5K. How should the trigger be configured? The FST limit is 3 kHz (prescale for the time being). Also follow up with the PAC, PWGC, and trigger board.

Elke: we should think of which trigger needs FST first, e.g., how much data needed.


§ Plans

· For the VME crate 77, David is going to reboot the main Canbus today.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.

Tonko suggests: look at some low event activity events, e.g., upc triggers.

FST: APB timing, experts will look into it.


06/12/2023

(Weather: 65-74F, humidity: 79%, air quality 61)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

After the current store (dump time @ 12pm), it will be 111x111 for one store until 9pm.

· Controlled access 45mins after this store.

· Machine testing next store.

Physics this week,

· Mon: 1kHz, Tu: 3kHz, leveling at sPHENIX, but normal rate at STAR.

· Wednesday APEX.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.


§ STAR status

· Full field: zdc_mb = 1.08B.

· Half field: zdc_mb = 99M, 15 hours of running.

· TOF issue resolved. NW THUB is now running on the external clock.

· Magnet tripped again when ramping up at midnight. Outdoor temperature was ~65F.

· STAR chiller ready on Tuesday. JH: first thing in the morning, confirmed, a few hours expected. Tonko: use this time to run tests on the TPC with zero field.

· Many “Didn’t build token because of ..abort” error messages. Remind the shift crew next week; Jeff will take this caution message out.

· VME crate 77 (BBQ) LV PS seems to have problems. Akio looked through the QA plots and found nothing wrong. The trigger group should investigate, and Tim can be ready around 9am to go in if we request controlled access.

· Jamie mentioned the drift velocity isn’t great [1] (run 24163024); HLT people will look into it. Tonko: could it be a half-field effect?


§ Plans

· Hank will look at the problem with the crate 77 (BBQ) LV PS, and Tim will go in during the controlled access.

· Diyu will grab new drift velocity from this year.

· Tonko: going to test the DAQ5K, mask RDO 6, Sector 1 in the code. DON’T mask it in Run control.

· Jeff will update ALL the trigger ids after the fix of TOF issue.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.


06/11/2023

(Weather: 60-78F, humidity: 73%, air quality 58)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday, however,

· No beam: Fri-Sun, 9pm-2am

· Next week, Mon: 1kHz, Tu: 3kHz, Wed: 5kHz leveling at sPHENIX, but normal rate at STAR.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.

06/14 APEX.


§ STAR status

· zdc_mb = 1.08B, 226 hours of running time. (~+90M since yesterday)

· The magnet tripped three times over the last ~16 hours!

· STAR chiller ready on Tuesday.


§ Plans

· Will be running half-field now.

· TOF: change or swap a cable to a different port. Tim can go in Sunday night 9pm-2am during the no-beam downtime. Geary will monitor/check.

Tim: check NW THUB if it is on local clock mode.

· David: if half-field running, will look into the alarm handler.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors. 


06/10/2023

(Weather: 54-75F, humidity: 69%, air quality 20)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday, however,

· No beam: Fri-Sun, 9pm-2am

· Next week, Mon: 1kHz, Tu: 3kHz, Wed: 5kHz leveling at sPHENIX, but normal rate at STAR.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.

06/14 APEX.


§ STAR status

· zdc_mb = 994M, 212 hours of running time. (~+60M since yesterday)

· Vernier scan finally happened last night. (background seems to be different when vernier scan happened at IP8)

· TOF investigation: Tim went in to move the NW-THUB TCD cable to a spare fanout port. The problem persists.

· RHIC seemed to have injection problems yesterday, and the beam was just lost at 9am.

· STAR magnet chiller status: Tuesday will be ready.

· sTGC timing is off. The RDO was changed; did Tonko look into this?


§ Plans

· TOF: change or swap a cable to a different port. Tim can go in Sunday night 9pm-2am during no beam downtime. Geary will be monitor/check.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.


06/09/2023

(Weather: 53-70F, humidity: 71%, air quality 59)


§ RHIC Schedule

HSSDs enabled in STAR Thursday, and resumed operation.

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday, however,

· Today: sPHENIX requests 20 mins access after this store.  first 6x6 bunches for MVTX.  vernier scan with 56x56 without crossing angle.

· No beam: Fri-Sun, 9pm-2am

· Next week, Mon: 1kHz, Tu: 3kHz, Wed: 5kHz leveling at sPHENIX, but normal rate at STAR.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.

06/14 APEX.


§ STAR status

· STAR is back to running. zdc_mb = 933M, 202 hours of running time. (~10% of goal)

· Yesterday, first fill was 6x6 bunches and 56x56 afterwards.

· We followed the procedure for turning all systems back on, with the help of experts. Everything was brought back within 1h 5min, except the TPC; the total was about 3 hours. The TPC cathode power supply (Glassman) and two control/monitor cards (4116 and 3122) were replaced. Alexei: contacted sPHENIX (Tom Hemmick); we need to build a spare for the cathode HV system. David: buy a new power supply, though Tom also has some spares in the lab.

· TOF: since the beginning of Run 23, ¼ of TOF has been lost; only ¾ of TOF works (?). The cause is not known. Offline QA should look at the TOF trays. Bunch IDs were not right, and the data were not right. More investigation is needed.

· UPC-jet trigger rates were much higher after STAR restarted, regardless of whether ETOW had problems. For other triggers, please also pay attention to any differences. (W. Jacobs just fixed/masked one of the trouble bits; rates seem OK.)

· DAQ: event-abort errors happened a few times. Check the online QA plots to see if they are empty. Jeff will remove that caution message.

 

§ Plans

· TOF experts should provide instructions to star-ops and/or offline QA team.

· We need to update the post-power-dip procedure for bringing back STAR (the 2021 version missed the EEMC, all forward detectors, the MTD, and the RICH scaler). Experts should provide short instructions.

· The reference plots are more or less updated. Subsystems that did not respond/provide feedback: sTGC, EPD. (Those experts were busy in the control room the past few days.) https://drupal.star.bnl.gov/STAR/content/reference-plots-and-instructions-shift-crew-current-official-version


06/08/2023

(Weather: 48-70F, humidity: 64%, air quality 162)


§ RHIC Schedule

This week stochastic cooling transverse.

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday but NOT at STAR until further notice.

and 06/14 APEX


§ STAR status

· STAR at full field; Field on to ensure RHIC running.

· No physics data were taken after Wednesday's access. STAR is shut down due to the poor air quality.


The lab decided to turn off HSSDs lab-wide -> no HSSD at STAR -> no STAR running.

Details:

The reason to shut down STAR is that the HSSDs (highly sensitive smoke detectors) needed to be turned off. The worry was that the air quality would get worse, all the HSSDs might go off, and the fire department would not know what to do or whether there was a real fire. Since the HSSD is within our safety envelope for operation, we cannot operate STAR with the HSSDs off. (sPHENIX is different, so they have kept running.)

· Since last night, 2-person gas-watch shift started. See Kong’s email on star-ops.

§ Plans

· MCR just called to ask us to prepare to ramp up! (09:58am)

· We need to come up with a procedure to shut down STAR safely and quickly. (Note: The process to shut down STAR yesterday was not as smooth as expected. Clearly, we do not do this every day.)

· We can use the procedure after the Power dip to bring back STAR.

· Jeff needs time to investigate DAQ.


06/07/2023

(Weather: 51-73F, humidity: 63%)


§ RHIC Schedule

This week stochastic cooling transverse.

VDM scan Wednesday after access (postponed from yesterday)

no cooling and no crossing angle (1h for physics), then add the angle back.

toward 2x10^9 per bunch, 56x56 will be regular. 

Access today (07:00-18:00), then physics;

and 06/14 APEX


§ STAR status

· STAR at full field;

· zdc_mb = 854M over 190 hours; (~104M+ since yesterday)

· MCW work is being done right now.

· STAR magnet chiller update: the parts are here and the work will be finished today, but we won’t switch over yet. The switchover does NOT require access.

· Blue Beam Background:

Akio: performed the BBC test yesterday and confirmed the blue-beam background. Run 24157039 was taken with bbcBackgroundTest. (Offline analysis of the background events would be helpful, but not easy without modifying the vertex reconstruction code.)

During the 5-minute store yesterday, which was supposed to be the Vernier scan, the background was still present without a crossing angle.

· Akio instructed the shift crew to perform a localClock and rhicClock test to understand the rate jump issue. Changed DetectorReadinessChecklist [1]

Jeff: run “setRHICClock” after cosmic runs, which is already updated in DetectorReadinessChecklist.

· One daughter card on EQ3, will be done by Christian.

· The overnight shift observed a few blue tiles in the EPD ADC. Experts? Mike: two SiPMs died, and two are a database issue. Will make a note to the shift crew. (Mike: going in today to look at the tiles.)

· asymmetric vertex distribution for satellite bunch, but not the main peak. 

· pedestals

L2 waits for 2 minutes before stop run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; check daughter card (will be discussed at the Trigger Meeting today).
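The pedestal checks above (MXQ RMS > 50, EQ crate means > 200) amount to flagging out-of-range channels. A hedged sketch of that kind of QA pass; the thresholds come from the notes, while the channel names and data layout are invented for illustration:

```python
def flag_pedestals(peds, rms_max=50.0, mean_max=200.0):
    """Return channels whose pedestal mean or RMS exceeds its limit.

    `peds` maps channel id -> (mean, rms); the layout is illustrative,
    not the actual STAR pedestal-file format.
    """
    return [ch for ch, (mean, rms) in peds.items()
            if rms > rms_max or mean > mean_max]

peds = {
    "MXQ-ch01": (90.0, 75.0),   # RMS > 50: flagged (retake pedestal)
    "EQ3-ch12": (230.0, 4.0),   # mean > 200: flagged (check daughter card)
    "EQ1-ch05": (150.0, 3.0),   # within range
}
print(flag_pedestals(peds))  # ['MXQ-ch01', 'EQ3-ch12']
```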

 

§ Plans

· Update the DetectorReadinessChecklist for Vernier scan. (a copy of the production config. Bring up detectors at flattop, don’t stop the run regardless of detector conditions.)

· MCW fixes for the electronics, 9am Wednesday, 3 hours expected. But likely needs longer.

for the MCW fix: TOF LV needs to be off and the full list of subsystems will be sent on star-ops by Prashanth. (DONE)

· TCU bits; Jeff/trigger plan for Wednesday down time with delay tests;

Jeff: will take 4-5 runs and 1h after the water work is done.

· ETOW Crate #4 (W. Jacobs/Tim) on Wednesday (Tim is working on the fix now)

· Spare QTD tests; Chris continues to work on it;

· DAQ5K, outer sectors; Tonko will do this on Thursday with beam.

Tonko: mask RDO6 sector 1, and perform tests.

· After water work is done, who needs to be called. Email star-ops first, and make a call list.

· Passwords update (Wayne Betts)

· Reference plots for online shift; experts of subsystems provide reference for a good run.

FST: run22 is the reference, no update needed.

EPD: will get to us.

GMT: will provide after the meeting.

MTD: ask Rongrong

sTGC: will get back to us


06/06/2023

RHIC Schedule

This week stochastic cooling transverse, (yellow done, but not blue)

toward 2x10^9 per bunch, 56x56 will be regular. 

06/07 APEX cancelled, sPHENIX access (07:00-18:00), then physics;

and 06/14 APEX


§ STAR status

· STAR at full field;

· zdc_mb = 750M over 176 hours; (~100M+ since yesterday)

· asymmetric vertex distribution for satellite bunch, but not the main peak. 

(could test without the crossing angle, 0.5mrad each, to see if the structure disappears)

· Blue-beam background: due to the fixed target we installed? The investigation indicated it is not related to the fixed target. FXT data from yesterday show background only in the positive-x horizontal plane;

Akio: perform BBC test today.

· The overnight shift observed a few blue tiles in the EPD ADC. Experts? Mike: two SiPMs died, and two are a database issue. Will make a note to the shift crew.

· Triggers: 2 upc-jet triggers (3,17) should be promoted (back) to physics;

(From yesterday)

· pedestals

L2 waits for 2 minutes before stop run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; check daughter card (will be discussed at the Trigger Meeting today).

 

§ Plans

· Magnet will be ramped down tomorrow 8:30am by shift leader, and Prashanth will take out the key.

· Magnet: chill water pump issues, prepare to be fixed on Wednesday morning.

JH: Oil line of the chiller is the problem. A few hours expected and hopefully fix the issue.

· MCW fixes for the electronics, 9am Wednesday, 3 hours expected.

for the MCW fix: TOF LV needs to be off and the full list of subsystems will be sent on star-ops by Prashanth.

· TCU bits; Jeff/trigger plan for Wednesday down time with delay tests; (plan for the afternoon after the water work done, and will be discussed at the Trigger Meeting Tuesday June 06 noon)

· ETOW Crate #4 (W. Jacobs/Tim) on Wednesday? (Tim plans to fix this tomorrow; he may need to replace a card in this crate)

· Spare QTD tests; Chris continues to work on it;

· DAQ5K, outer sectors; Tonko will do this on Thursday with beam

· Reference plots for online shift; experts of subsystems provide reference for a good run.


06/05/2023

1. RHIC Schedule

This week stochastic cooling transverse,

toward 2x10^9 per bunch, 56x56 will be regular; 

chill water pump issues, prepare to be fixed in next few days, but STAR at full field;

06/07 APEX cancelled, sPHENIX 8 hours access;

and 06/14 APEX


2. STAR status

a. zdc_mb = 645M over 159 hours;

zero field: zdc_mb = 45M

half field: zdc_mb = 17M

b. bunch crossing and vertex fingers;

maybe transverse SC will fix everything;

move beam 0.6mm and 0.3mm both directions;

still investigating; 

c. STAR chill water pump issues,

shift leader can ramp STAR magnet while beam is ON, but need to coordinate with MCR ahead of time; run well so far;

clean water tank on Wednesday; still searching for parts;

d. Blue Beam Background, due to fixed target we installed?

FXT data yesterday, only see background at positive x horizontal plane;

e. ZDCSMD ADC issues;

Chris reported gain file issue; understood and will be fixed; remove pxy_tac.dat file 

f. pedestals

L2 waits for 2 minutes before stop run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; check daughter card

g. dimuon trigger:

MXQ calibration is good; loose trigger time window than used to be;


3. Plans

a. Kong Tu is going to be the period coordinator for the next two weeks;

b. TCU bits; Jeff/trigger plan for Wednesday down time with delay tests;

c. Spare QTD tests; Chris works on it;

d. DAQ5K, outer sectors; Wednesday test during down time;

10 days on low luminosity; another week for high luminosity;

e. Reference plots for online shift;

f. Water group (coordination) starts Wednesday morning, 3+ hours;


06/04/2023

1. RHIC Schedule

Thursday stochastic cooling longitudinal done, transverse next week,

1.3x10^9 per bunch, 56x56 will be regular; 

chill water pump issues, prepare to be fixed in next few days, but STAR at full field;

9PM-2AM no beam both Saturday and Sunday, sPHENIX TPC conditioning;

06/07 APEX cancelled, sPHENIX 8 hours access;

and 06/14 APEX


2. STAR status

a. zdc_mb = 450M over 143 hours;

zero field: zdc_mb = 45M

half field: zdc_mb = 17M

b. bunch crossing and vertex fingers;

storage cavity not fully functional, asymmetric?

Yellow (WEST) second satellite bunch colliding with blue main bunch;

keep it as is;

c. STAR chill water pump issues,

shift leader can ramp STAR magnet while beam is ON, but need to coordinate with MCR ahead of time; run well overnight so far

d. ZDCSMD ADC issues;

Hank confirmed the issues (potentially internal timing issue)?

all channels; NOT in EPD QTD; some features need further investigation;

work with Chris on this

e. Blue Beam Background, due to fixed target we installed?

a FXT test?

FXT configuration flip east vs west; DONE;

HLT needs to change to FXT mode, DONE; 

J.H. coordinates the fast offline (~0.5-1 hour);

f. eTOW out quite frequently (one crate is out);

g. pedestals

L2 waits for 2 minutes before stop run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; discuss it tomorrow?

Or give shift leader specific instruction to ignore specific boards;


3. Plans

a. TCU bits;

b. Spare QTD tests;

c. Blue beam background FXT test right after the meeting;

d. DAQ5K, outer sectors; Wednesday test during down time;

10 days on low luminosity; another week for high luminosity;

e. FCS monitoring trigger (discuss at triggerboard);


06/03/2023

1. RHIC Schedule

Thursday stochastic cooling longitudinal done, transverse next week,

56x56 will be regular; 

chill water pump issues, no full field until 8PM last night, tripped at 11PM.

sPHENIX magnet quenched yesterday, ramped up successfully;

9PM-2AM no beam both Saturday and Sunday, sPHENIX TPC conditioning

06/07 APEX cancelled, sPHENIX 8 hours access;

and 06/14 APEX


2. STAR status

a. zdc_mb = 452M over 127 hours;

zero field: zdc_mb = 45M

half field: zdc_mb = 17M

b. STAR chill water pump issues, magnet trip at around 11PM last night

shift leader can ramp STAR magnet while beam is ON, but need to coordinate with MCR ahead of time; run well overnight so far

c. ZDCSMD ADC issues;

Han-sheng found it and reported it to the QA board.

Does EPD see this feature in QTD?

fencing feature with one ADC count per bin;

d. Blue Beam Background, due to fixed target we installed?

a FXT test?

FXT configuration flip east vs west; TODAY;

HLT needs to change to FXT mode (Dayi)?

J.H. coordinates the fast offline?

e. The shift leader found a (significant-size) snake in the assembly hall and moved it to the RHIC inner-ring area. If you spot one, you can call the police.


3. Plans

a. TCU bits

b. Spare QTD tests

c. Blue beam background FXT test


06/02/2023

1. RHIC Schedule

Thursday stochastic cooling longitudinal done, transverse next week,

56x56 will be regular; 

STAR magnet tripped yesterday morning and has not been at full power since;

chill water pump issues, no full field until 5PM tonight.

sPHENIX first cosmic ray track in TPC;

9PM-2AM no beam both Saturday and Sunday, sPHENIX TPC conditioning

06/07 APEX cancelled, PHYSICS data?

and 06/14 APEX



2. STAR status

a. zdc_mb = 405M over 117 hours;

zero field: zdc_mb = 40M

half field? zdc_mb and upc_main

b. a few changes in trigger conditions:

zdc killer bit applied on coincidence condition;

UPC-JPSI and UPC-jets require eTOW in;

c. MTD QT noise is back, need to retake pedestal;

d. Cannot start chill water pump, start 5PM,

next few days, temperature low, should be able to run

e. BBC route to RHIC, blue background high


3. Plan

a. TCU bit work on-going

b. High luminosity configuration;


06/01/2023

1. RHIC Schedule

no beam from Tuesday 7:30 to Wednesday evening 8PM. 

Sweep experiment areas at 6PM Wednesday; physics data at 8:30PM;

1.3x10^9 per bunch, leveling at STAR;

sPHENIX magnet has been ON; 

Thursday stochastic cooling after this current store (56x56),

06/07 and 06/14 APEX


2. STAR status

a. zdc_mb = 385M

b. Access task completion:

BEMC done, MTD BL-19 sealant to gas connector for minor gas leak;

BBC scaler, fixed a dead channel (move from #16 to different label),

need to route from DAQ room to RHIC scaler;

ZDC TCIM: fixed a broken pin and dead processor,

setting deadtime for scaler output (was 20us, set to 1us)

gain to sum output set to 1:1 (was 1:0.5)

Pulser to TCU: 3 TCU bits out of time, need look into this;

sTGC 4 FEEs did not improved (still dead)

EPD 2 channels remap done; QTD into spare slot;

VPD MXQ calibration does not look correct; contact Isaac/Daniel

c. Trigger condition updates, and production IDs

all physics triggers are promoted to production ID;

EJP trigger 10x higher; hot towers?

UPC-JPSI trigger too high after access; ETOW was out while related triggers are IN; 

set up reasonable range expected with color scheme for DAQ monitoring;

Jeff and the specific trigger ID owners

reference plots, still run22 plots for online shift crew; need to work on this once the beam and STAR operation are stable (next few days)

d. Magnet trip this morning at 9:29AM

bringing back the magnet in progress;

no errors on our detector; beam loss 3 minutes later;

magnet is back up;

magnet temperature is high; work in progress; down to 0 and

call chill water group;


3. On-going tasks and plans

a. BBC scaler need to route from DAQ room to RHIC scaler;

b. ETOW readout is out but trigger is ON;

Jeff need to set up a scheme for eTOW related trigger when ETOW is out;

c. TCU bits, trigger group continues the work on bit issues using the pulser

d. QTD, chris will look into the one we just put back into EQ4

e. MXQ VPD need further work on calibration

JEVP online plot of BBQ VPD vertex distribution missing;


05/31/2023

1. RHIC Schedule

no beam from Tuesday 7:30 to Wednesday evening 8PM (access could be up to 6PM). 

Sweep experiment areas at 3PM Wednesday;

1.3x10^9 per bunch, leveling at STAR; 

Thursday stochastic cooling first,

then sPHENIX magnet ON exercise, we should probably put STAR detector on safe status for the first fill. 

06/07 and 06/14 APEX


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC, UPC-JPsi (prescale not decided);

dimuon-MTD;

UPC-jets, UPC-photo;

zdc_mb_counter no production ID, zdc_mb_y and zdc_mb_ny removed

b. Another two incidents of DO and shift crew did not show up

DO from SBU started Wednesday owl shift

c. Water tower work plan in a couple of weeks


1. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 (done) work

MTD gas leak BL19 (11:30) Rongrong/Bill

b. Pulser for TCU bit checking, Christian/Tim, 107ns pulse; connected, waiting for Jeff's test

c. Laser in progress

d. MTD/VPD splitters (swap out with a spare) not done yet, 3 dead channels, Christian/Tim

e. EPD QTC remapping two QTC channels happens today;

QTD put into the crate to EQ4 spare slot? 

f. sTGC 4 FEEs no signals, reseat cables (magnet OFF) on-going

g. BBC B&Y background signals, single and coincidence issues to RHIC Blue background;

h. BCE crate errors; fixed by Power cycle

i. Measurement of dimensions of rails for EIC (Rahul/Elke, 12-1PM)


05/30/2023

1. RHIC Schedule

no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

1.3x10^9 per bunch, leveling at STAR; 

Vacuum issues with store cavity in both yellow and blue, BPM issues, debunch issues on Monday 1 hour store;

Thursday stochastic cooling first,

then sPHENIX magnet ON exercise, we should probably put STAR detector on safe status for the first fill. 

06/07 and 06/14 APEX


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC, UPC-JPsi (prescale not decided);

dimuon-MTD;

Not promoted on UPC-jets, UPC-photo;

b. TPC Cathode trips during beam dump;

change procedure on TPC Cathode turn OFF before beam dump and right after beam dump, turn cathode back ON;

eTOF standby with high current a few days ago; 

c. Air conditioners in trailer (Bill will check on this)

d. Trigger BCE crate, dsm1 STP error, took out BCE crate;

update outdated document (on removing BBC crate);

e. Arrange for sTGC/MTD HV crate repairs

f. FST refill coolant


1. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 work

MTD gas leak BL19 (maybe) Rongrong/Bill

b. Pulser for TCU bit checking

Christian/Tim 107ns pulse;

c. Laser

d. MTD/VPD splitters (swap out with a spare)

e. EPD QTC remapping two QTC channels 

f. sTGC 4 FEEs no signals, reseat cables (magnet OFF)

g. BBC B&Y background signals, single and coincidence issues to RHIC

h. BCE crate errors

i. Measurement of dimensions of rails for EIC (Rahul/Elke, 12-1PM)


05/29/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

1x10^9 per bunch (+20%); 16KHz zdc rate; STAR requests leveling at 10KHz for about 10-20 minutes;

automatic script does not work yet.

No stochastic cooling now; one of the five storage cavities in Yellow failed; store length is about 1.5 hours;

1.3x10^9 per bunch, leveling at STAR; 


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC, UPC-JPsi (prescale not decided);

nothing from UPC-jets, UPC-photo, dimuon-MTD;

b. MTD calibration is done; tables uploaded,

need to apply the TAC cuts, and then production ID:

MXQ VPD maybe minor issues need to address

c. Water coming out of the cooling tower: this is by design for more efficient cooling; a small AC unit cools down the chill water

d. Replaced MTD PS crate (Dave), was successful;

need to ship the spare for repair; currently use sTGC spare for operation

Tuesday access to check HV mapping

e. FST additional latency adjustment;

FST in pedestal runs

f. Add eTOF into TOF+MTD noise run if eTOF is operational


3. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 work

b. Pulser for TCU bit checking

c. Laser

d. MTD/VPD splitters

e. EPD QTC west daughter card need to swap out?

performance seems to be OK, need further check before swap;

Christian/Tim swap whole module?

f. sTGC 4 FEEs no signals, reseat cables 

g. BBC B&Y background signals, single and coincidence issues to RHIC


05/28/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

1x10^9 per bunch (+20%); 16KHz zdc rate; STAR requests leveling at 10KHz for about 10-20 minutes;

automatic script does not work yet.

No stochastic cooling now


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC,

nothing from UPC-jets, UPC-photo,

elevate on UPC-JPSI triggers

b. Trigger event too large, some crashed L2,

zdc_mb_prepost prepost window set to +-1 (was -1,+5)
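Shrinking the prepost window directly shrinks the event size, since each extra pre/post crossing adds readout. A minimal sketch of the window arithmetic; the trigger crossing number 120 is an arbitrary illustration:

```python
def prepost_crossings(trigger_bx: int, pre: int, post: int):
    """List of bunch crossings read out around the triggered crossing."""
    return list(range(trigger_bx - pre, trigger_bx + post + 1))

# The old (-1,+5) window reads 7 crossings; the new +-1 window reads 3:
print(len(prepost_crossings(120, 1, 5)), len(prepost_crossings(120, 1, 1)))  # 7 3
```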

c. tune_2023 for calibration and test;

Production should be for production ONLY

d. RHIC is leveling STAR luminosity at a 10KHz ZDC rate, at STAR's request.

e. Event counts: zdc_mb = 218M

f. FST latency adjustment is done;

4 APV changed by 1 time bin


3. On-going tasks and plans

a. EPD bias scan done;

a couple of channels have been adjusted;

higher threshold for zero suppression; need to implement;

gate on C adjusted; TAC offset and slewing corrections

b. MTD calibration 

c. Fast Offline st_physics events not coming out today

d. The TOF noise run does not need to be taken daily if there is continuous beam injection and physics


4. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 work

b. Pulser for TCU bit checking

c. Laser

d. MTD/VPD splitters

e. QTC west daughter card need to swap out?

Christian/Tim swap whole module?

f. sTGC 9 FEEs no signals, reseat cables 

g. BBC B&Y background signals, single and coincidence issues to RHIC


05/27/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

ZDC_MB =~ 5KHz

no stochastic cooling; landau cavity for blue tripped yesterday,

rebucket vs landau cavity RF 56 bunches every other bunches in phase,

changed the fill pattern, which solved the trip issue. Leveling works at 10KHz; the automatic script does not work yet.


2. STAR status

a. Trigger condition updates, and production IDs

UPC_JPsi, ZDC HV and production ID;

UPC_JET and UPC_photo not in production ID;

FCS bit labels not changed yet, and the new tier1 files are in effect; 

need clarification today.

b. Any remaining trigger issues? (-1,+5)? zdc_mb_prepost

RCC plot not updating;

c. EPD scans

timing scan done; 4 channel stuck bit;

bias scan next; onl11,12,13 for online plotting cron servers;

zero suppression 30-40% occupancy 0.3MIP (~50)
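The zero-suppression choice above sets the threshold at 0.3 MIP (~50 ADC counts). A hedged sketch of that conversion; the 0.3 MIP fraction is from the notes, while the pedestal and MIP amplitudes are hypothetical numbers for illustration:

```python
def zs_threshold_adc(ped_mean: float, mip_adc_above_ped: float, frac: float = 0.3) -> float:
    """Zero-suppression threshold: pedestal plus a fraction of the MIP amplitude."""
    return ped_mean + frac * mip_adc_above_ped

# With a hypothetical MIP ~165 ADC above pedestal, 0.3 MIP is ~50 counts:
print(zs_threshold_adc(100.0, 165.0))  # 149.5
```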

d. MXQ VPD calibration done, MTD calibration next

e. BBC B&Y background scalers not working

Christian has a pulser; order a few more?

f. Confusion about FST off status and message

DO need to make sure FST OFF

g. Jamie’s goal tracking plots? zdc_mb, BHT3?

h. eTOF ran for 6 hours, and failed,

If failed, take out of run control;

eTOF follows HV detector states as TOF for beam operation;

i. TPC, drift velocity changes rapidly; new gas?

new vendor, old manufacturer; online shows stable


3. On-going tasks and plans

a. Pulser for TCU, MTD BL8 and BEMC work on Wednesday

b. sTGC FEE reseat the cable on Wednesday; Magnet OFF

c. ESMD overheating; inspect on Wednesday, talk to Will Jacobs

d. East laser tuning Tuesday


05/26/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

Blue-beam Landau cavity tripped; half the beam was lost at the beginning, and it appears to light up the iTPC;

Stochastic cooling will setup hopefully today; no expert available today, over the weekend;

three-hour fill with Landau cavity on (or without if it does not work)


2. STAR status

a. We had a couple of incidents in which the shift crew or shift leader did not show up; please set your alarm. It is an 8-hour job; try to rest/sleep for the remainder of the day

b. Laser: the DO always needs to keep monitoring the intensity

need to pass the experience on to the evening shifts

c. zdc_mb = 65M

d. VPD calibration; BBQ done, MXQ not done, dataset done

e. MTD dimuon_vpd100 out until expert calls

f. L4 plots are not updating; online plot server is not available;

g. FST fine-tuning of the latency setting; update the online plot;

beam with updated online plot;

h. New production ID; vpd100, BHT#? BHT3?


3. On-going tasks and plans

a. Pulser for TCU monitoring;

b. sTGC 4 FEE not working;

HV scan, gain matching; (Prashanth/Dave instructions)

c. L2ana for BEMC

The l2BtowGamma algorithm has been running. L2peds have not been, Jeff just restored them.

d. QTD

Chris fixed the issue, EPD looks good;

QTC looks good;

pedestals width large when EPD ON

ON for the mean, MIP shift correlated with noise rate?

gain QTD>QTC>QTB

Eleanor new tier1 file?

afterward, EPC Time, gain, offset, slewing, zero-suppression items

QTB->QTD swap back? Wait for trigger group?

leave it alone as default

ZDC SMD ADC noisier, but it is OK.


05/25/2023

1. RHIC Schedule

another PS issue, and storage cavity problem,

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches and sPHENIX requested 6 bunches 


2. STAR status

a. No beam yesterday and this morning

b. Laser: the DO always needs to keep monitoring the intensity

c. zdc_mb = 50M

d. VPD slewing waiting for beam


3. On-going tasks

a. QTD issues,

LV off taking pedestal file

threshold and readout speed

Chris confirmed by email that channels 0-3 in the QTD are indeed off by 1 bunch crossing in a bench test;

Chris and Hank are going to discuss after the meeting

and send out a summary and action items later today.

I feel that we may have a resolution here


05/24/2023

1. RHIC Schedule

Abort kicker power supply issue (blue beam), no physics collisions since yesterday.

They may do APEX with just one beam;

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches


2. STAR status

a. zdc_mb = 50M

b. VPD slewing and BBQ upload done,

NEXT MXQ 

c. sTGC sector#14 holds HV;

a few FEEs do not show any hits;

d. sTGC+FCS in physics mode

FST still offline, need online data QA to confirm

Latency adjustment,

e. eTOF HV on, included in run

OFF during APEX


3. On-going tasks

a. TCU pulser another test during APEX


4. Plans for the week and two-day access next week

a. MTD calibration and dimuon trigger after VPD done

b. EPD bias scan and TAC offset and slew correction

c. Next week, electronics for pulser in the hall (Christian)

d. Wednesday BEMC crate on top of magnet PS fix (Bill/Oleg)

e. Wednesday MTD BL-8 THUB changed channel (Tim)

f. Plan for resolving QTD issues:

before Sunday,

taking data with zdc_mb_prepost (-1,+2) in production;

Aihong observed changes in ZDC SMD signals during the BEMC time scan;

Jeff will follow up on what time delays in TCD on those scans; 

After Sunday, Chris will do a time scan or other tricks to figure out what

the issues with QTD are; we need a clean understanding of the issues and solutions. If this is NOT successful,

Wednesday replace all QTDs by QTBs and derive a scheme to selectively read out QTBs for DAQ5K for both BBQ and MXQ (EPD and ZDC SMD).

Mike sent out a scheme for EPD 


05/23/2023

1. RHIC Schedule

MCR working on stochastic cooling, longitudinal cooling first, will reduce the background seen at STAR and sPHENIX. 

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches


2. STAR status

a. TPC in production, DAQ 5K tested this morning with iTPC sector, TPC current looks good;

Deadtime is rate dependent; outer sector RDO optimization for rate (Gating Grid); 15 kHz to saturate the bandwidth; Tonko would like to keep the ZDC rate high (~5 kHz)

b. EPD gain and time scan

Timing scan last night and set for 1-3 crates, EQ4 very different timing,

need an update of the individual labels for settings; needed for the next-step bias scan; QTD first 4 channels' signals low (1/20); same observed in ZDC SMD; Eleanor needs to change the label in the tier1 file and tune file, and Jeff will move it over. QTD->QTB replacement works.

c. VPD scan

Daniel and Isaac BBQ data using HLT files for fast calibration;

VPD_slew_test from last year (BBC-HLT trigger)

MXQ board address change? 12:30 trigger meeting;

d. BSMD time scan; scan this morning, will set the time offset today


3. On-going tasks

a. ZDC SMD QTD board issues

ZDC SMD QTD shows same issues with first 4 channels

MXQ power cycled, SMD readout is back

pre-post +-2 zdc_mb trigger data taking after the meeting

b. TCU bit test with the pulser RCC->NIM Dis->Level->TTL->RET

bit to TCU 12,15

c. Some triggers are being actively updated, BHT4 UPCjet at 13

d. Adding more monitoring trigger (ZDC killer bits)

plan: discuss at trigger meeting; pulser 100ns


4. Plans for the days

a. FCS close today?

coordinate with MCR for a short controlled access today

b. BSMD helper from TAMU

BSMD only operates at high luminosity

ESMD only operates at high luminosity

Will discuss action items at later time


05/22/2023

1. RHIC Schedule

Access at 10 AM, 2 hours of controlled access.

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches


2. STAR status

a. TPC in production, DAQ 5K is NOT ready yet,

outer sectors firmware optimization, need about 3 weeks,

rate at about 3KHz, 

laser runs well,

b. sTGC sector 14 masked out; checking will be done behind the scenes;

sTGC and FST will be in production

c. FCS: watch the luminosity and background for the next few days, then decide whether we close the calorimetry

d. Trigger system, tcu bit slot#21-35 BAD, BHT1, dimuon, zdc_mb_gmt

a few other triggers with high scaler deadtime; zdc_killer should be discussed at the trigger board meeting,

TCU spare daughter card good, two spare motherboards,

highest priority,

e. TOF

no issues in production

f. VPD

working on slewing correction, an issue with TAC offset with MXQ

VPD MXQ one and BBQ two channels (Christian is going to check them next access)

g. ZDC and ZDC SMD

SMD timed correctly, need Aihong to check again

SMD no signal at QT

h. EPD

replace EQ4 QTD now

EPD time scan and LV bias scan tonight,

Need to do time and offset matching among tiles, need more time,

i. BEMC is timed; one crate on top of the magnet stopped sending data, never seen such a failure (coincided with the beam dump), 3% of total channels

j. BSMD: in the middle of the time scan, BSMD02 failed,

need pedestal online monitoring helper (star management follows up)

k. FCS needs to close position; LED run procedure; trigger not commissioned; stuck bit needs to be re-routed; thresholds need to be discussed; a week from today

l. MTD, Tim THUB push in, trigger needed VPD and MTD timing calibration

m. Slow control

fully commissioned, MCU unit for sTGC, more resilient against radiation,

HV IOC updated, trip level set according to luminosity

TOF and MTD IOC updated (fixed connection issues)

need to update the instruction procedure

SC general manual updates.

n. Fast Offline

started on Friday, processing st_physics; request to process st_upc streams, st_4photo?

QA shift for fast offline in China, Google access issues, alternative path to fill histograms and reports

o. FST, commissioning,

Field OFF beam special request after everything ready


05/21/2023

1. RHIC Schedule

No 9AM CAD meeting. Stores with 56 bunches, will continue over the weekend,

Potential access Monday for RHIC work, sPHENIX and STAR


2. STAR status

a. production_AuAu_2023 TRG+DAQ+ITPC+BTOW+ETOW+TOF+GMT+MTD+L4+FCS

Fixed a few issues yesterday, zdc_mb promoted to production ID.

TCU hardware issue, avoid tcu slot#21-25

Need to check whether same issue occurs with other tcu slots: external pulse (Christian)

b. Fix blue beam sync bit

c. Fix L4 nhits 

d. ESMD time scan done

e. TPX/iTPC done

f. UPS battery and the magnet computer dead, need replacement by CAS


3. Ongoing tasks

a. VPD scan for slew correction, update to “final”, QTC in BBQ and MXQ

pedestal run needed to apply the slewing and offset corrections

L4 needs new VPD calibration file.

VPD TAC look good now after pedestal run, last iteration will be done.

VPD on BBQ is fine, but need to check MXQ

b. Minor issues need work on TPC

c. Fast offline production comes (contact Gene)

d. BSMD: one of two PCs has memory errors, need to swap out in DAQ room

e. EPD time and bias scan after QTD replacement

f. MTD one backleg need work (canbus card need push-in, magnet off, need VPD calibration done)

g. Beam loss at 10:30 during a chromaticity measurement; the beam aborted unexpectedly. MCR called STAR to inform about the measurements, but the CAD system put up “PHYSICS ON” and the STAR shift turned on the detector, thinking that MCR was done with the measurement and PHYSICS was ON. Mitigation: make sure that information (calls and instructions) from MCR overrides the BERT system.


4. Plan of the day/Outlook

a. Collision stores over the weekend

b. Access Monday

c. FCS position, wait until we get more information about the abort, takes 15 minutes to close.

d. sTGC status and plan?

e. FST is good status, will check if further calibration is needed

f. Monday magnet OFF during access? Shift leader


Confirm with Christian about access Monday


05/20/23

I. RHIC Schedule

 Stores with 56 bunches since yesterday evening, will continue over the weekend


II. STAR status

 production_AuAu_2023 TRG+DAQ+ITPC+BTOW+ETOW+TOF+GMT+MTD+L4+FCS+STGC+FST


III. Ongoing tasks

 Production configuration, trigger rates, BBC TAC incorrect

 Autorecovery for TPX not available, crews powercycle the relevant FEE

 EPD bias scan to resume today, timing scan for QTD

 VPD TAC offsets taken overnight, slew correction to be taken

 Series of sTGC HV trips after beam loss yesterday evening, keep off over weekend

 BSMD, ESMD need timing scan

 zdc_mb production ID

 Access requirements, list of the needs


IV. Plan of the day/Outlook

 Collision stores over the weekend


05/19/23

I. RHIC Schedule

 We had stores with 56 bunches till this morning.

 Possible access till 11am, beam development during the day

 Collisions overnight


II. STAR status

 tune_2023 TRG+DAQ+ITPC+BTOW+ETOW+TOF+GMT+MTD+L4+FCS+STGC+FST running overnight

 ZDC HV calibration done

 

III. Ongoing tasks

 TPX prevented starting the run, Tonko working on it, ok now

 EEMC air blower is on, chill water not yet

 BSMD had corrupt data in bsmd02 in cal scan

 EPD calibrations ongoing, work on QTD, ok for physics

 eTOF worked on by experts

 VPD HV updated, will do TAC offsets

 sTGC plane 2 is empty in some places

 Production trigger configuration by Jeff today


IV. Plan of the day/Outlook

 Possible access till 11am

 Beam development during the day

 Collision stores overnight and during the weekend


05/18/23

I. RHIC Schedule

We had store with 56 bunches till this morning.

1 - 3 stores are scheduled today overnight

Beam development during the day, opportunity for controlled access


II. STAR status

Runs with tune_2023 TRG+DAQ+ITPC+TPX+BTOW+TOF+GMT+MTD+L4+FCS+STGC overnight

Done with BBC gain scan, and EPD scan without EQ4, BTOW timing scan without ETOW


III. Ongoing tasks

EEMC turn on (email by Will J.), BTOW + ETOW timing scan in upcoming store

VPD-W, cat-6 to be connected, VPD data from this morning ok, VPD should be off till then, controlled access needed with magnet off

sTGC ROB #13 has TCD cable disconnected, needs to be fixed or masked out, access with magnet off

EQ4 does not run for EPD, 25% of the detector not available, ongoing with trigger group

Trigger FPGA issues in the beginning of the store, could not get past 15 events, started to take data when different part of the FPGA was used (temporary workaround)

TOF LV yellow alarms

BSMD timing scan (Oleg, tonight) + endcap shower max


IV. Plan of the day/Outlook

Beam development during the day for rebucketing

Opportunity for controlled access after rebucketing is done (work on collimators)

Collision stores (1 - 3 stores) overnight, no crossing angle


05/17/23

I. RHIC Schedule

Restricted access till 6pm (scheduled)

First collisions today early overnight


II. Ongoing tasks

Access ongoing for poletip (scheduled till 6pm), reinsertion in progress

All TPC RDOs were replaced yesterday and tested

FST tested ok, water leak is fixed

TPC lasers, work in progress on control computer, waiting for new repeater, for now works only on the platform


III. Plan of the day/Outlook

Access till 6pm, poletip insertion, will finish earlier (before 4pm)

Collisions early overnight, could be in 2 hours after the access is done, lower intensity because of no stochastic cooling for now

Cosmics + lasers after poletip closed and magnet on


05/16/23

I. RHIC Schedule

Restricted access till 10pm.

Beam ramps overnight, both beams

First collisions as early as Wednesday night, likely on Thursday


II. Ongoing tasks

Poletip removal in progress, access till 10pm today + access tomorrow till 6pm

TOF LV for tray 18 west 2 was too low, the channel was swapped to a spare (ok), work in progress on GUI update


III. Plan of the day/Outlook

Access till 10pm, beam development overnight

Collisions on Thursday


05/15/23

I. RHIC Schedule

Restricted access ongoing till 2:30pm to prepare for poletip removal

Beam development overnight, blue and yellow ramps

First collisions on Wednesday night, 6 bunches


II. Ongoing tasks

Preparation for poletip removal (BBC, EPD, sTGC), access today till 2:30pm

ETOW and ESMD off (FEE LV and cooling water)

TOF LV is too low for tray 18 west 2, caused high trigger rate, taken out of the run, call to Geary, mask it off now

MTD THUB-N new firmware (Tim today, behind the barrier)

Tier-1 for timing on Wed (Jeff+Hank)

Inform SL over zoom of any work from remote, like ramping up/down HV/LV

sTGC LV to standard operation in the manual (David)


III. Plan of the day/Outlook

Access till 2:30, likely done earlier, beam development overnight

Collisions on Wednesday night, 56 bunches (10 kHz) + then 6 bunches for sPHENIX

2022 ops meeting notes

Notes from STAR Operations Meeting, Run 22


04/18/2022

I.   Summary of Operations:
  • Forward system 107% goal reached, 98% goal for central (BHT3) program reached - congratulations to all!
  • Smooth data taking 
  • 6 hrs CeC after 17:30 
  • sTGC gas alarm due to malfunction in HSSD, which was checked and fixed during CeC
II. STAR Schedule
  • Today
    • Shutdown activities; Shift Leader and a DO shall be present in the CR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Physics data taking with minor problems 
    •  
  •  Evening Shift
    • Cosmics during CeC
    • sTGC gas alarm due to the HSSD malfunction
  •  Overnight Shift
    • Continue data taking with few issues
    • Beam dumped at about 8 as planned
IV.  To Do:
  • Gas purge
  • Other shutdown activities
  • Shift leader and one DO in the CR as long as the purge is going on, certainly during the day
  • Do we need eve. shift if the purge is complete?
  • DAQ shut down…


04/17/2022

 

I.   Summary of Operations:
  • Smooth data taking 
  • 4 hrs CeC after 16:30 -22:00
  • Three occurrences of DAQ control window crashing - restored by Shift Leader, no data loss
  • CeC can take time “at will” between now and the end of the run
II. RHIC Schedule
  • Today
    • Physics for STAR
    • Possible CeC
  • Tomorrow 
    • End of run at 8:00
    • TPC purge after beam dump at 8 am- Shift leader and a DO shall be present in the CR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Physics data taking with minor problems 
    •  
  •  Evening Shift
    • Cosmics during CeC
    • Problem with ETOW FEE crates taken care of with Scott’s help
    • Data taking after 22:30
  •  Overnight Shift
    • Continue data taking with few issues
    • Beam dumped at 7:05
IV.  To Do:
  • Efforts to include ETOF in data taking shall continue
  • Watch QA plots for EEMC because of eemc-dc error message
  • Follow instructions and contact EEMC expert in case of problems, contact person Scott Wissink
  • TPC purge - requires Shift Leader and DO present
  • Hank to contact Jack about STP and shall email starops the recommendations


04/16/2022

 

I.   Summary of Operations:
  • pCarbon 2hrs done, 10 min zdc polarimetry runs taken during that period
  • CeC after pCarbon in the evening 
  • Beams back at about 22:45
  • CeC can take time “at will” between now and the end of the run
  • Damper test at injection sometime this weekend
  • Messages can't connect to eemc-dc -- data corruption possible
  • Wayne came and checked the communications; nothing was found; no access during CeC was needed
  • EEMC online QA shows no problems
  • TPC sector 12 has one hot pixel (intermittently), experts know about it
II. RHIC Schedule
  • Today
    • Physics for STAR
    • Possible CeC
  • Tomorrow 
    • Physics for STAR
    • Possible CeC
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Physics data taking with minor problems
    •  
  •  Evening Shift
    • pCarbon polarimetry 
    • Daily calibration runs
    • Cosmics during CeC
    • Data taking after 22:45
  •  Overnight Shift
    • Continue data taking with few issues
    • Beam dumped at 6:45
    • Some issues with FCS triggers but online QA ok, reboot trigger and/or reboot all fixed the issue
IV.  To Do:
  • Efforts to include ETOF in data taking shall continue
  • Watch QA plots for EEMC because of eemc-dc error message
  • Follow instructions and contact EEMC expert in case of problems, contact person Scott Wissink


04/15/2022

 

I.   Summary of Operations:
  • Access to fix communications 9 - 11:30, during which:
    • Alexei checked the water leaks.
    • Wayne and Tim helped us bring the sTGC LV, FCS connections back. Also cleared the EEMC crate booting issues.
    • Wayne and Tim noticed the EEMC HVSys A module was dead. Tim replaced the module, restored HV branches. EEMC are now all back up.
  • pCarbon 2 hrs not done yet, TBC
  • CeC in the evening 6:30 - 11 pm
  • Vernier scans done
  • Beam lost at about 4 am
  • CeC can take time “at will” between now and the end of the run
  • Damper test at injection sometime this weekend
II. RHIC Schedule
  • Today
    • Physics for STAR
    • pCarbon session at the end of this current store
    • Detectors in ready to dump state during pCarbon, run ZDC polarimetry only
    •  
  • Tomorrow 
    • Physics for STAR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Access, fixed number of problems, see above
    • Physics data
    •  
  •  Evening Shift
    • Physics data till 18:30
    • Daily calibration run
    • Cosmics during CeC
  •  Overnight Shift
    • Data taking quite smoothly started at ~ 0:45 
    • Beam loss
    • Data taking resumed at the end of the shift ~ 7:10
IV.  To Do:
  • Efforts to include ETOF in data taking shall continue
  • Use 10 min ZDC polarimetry runs during pCarbon and reconcile the time stamps after the fact


04/14/2022

 

I.   Summary of Operations:
  • APEX started  8:00 - 24:00 yesterday 
  • pCarbon 2hrs on Thursday 4/14 at the end of a store, TBC
  • Expect CeC requesting time intermittently
  • Two more vernier scans back to back this week
  • EEMC problem diagnosed - fixing requires access - fixed during morning access
  • sTGC LV and FCS control communications problem developed during the owl shift - fixed during morning access
II. RHIC Schedule
  • Today
    • Access 9 am - done, very successful. All known and new found problems were fixed. See shift log for details
    • Physics for STAR
    • pCarbon session at the end of a store - the upcoming store
    • Vernier scans, two in one store - also the upcoming store
    •  
  • Tomorrow 
    • Physics for STAR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • APEX
    •  
  •  Evening Shift
    • APEX
  •  Overnight Shift
    • Data taking quite smoothly
    • sTGC LV and  FCS communications problem developed, data seem OK
IV.  To Do:
  • Efforts to include ETOF in data taking shall continue
  • Access to fix EEMC and communication problems as described above - all fixed during access


04/13/2022 

 

I.   Summary of Operations:
  • Smooth data taking until 1 pm
  • CeC dedicated 4 hrs 
  • Beam resumed about 18:00 data taking continued
  • APEX started at 8:00 today
  • There was an attempt to include ETOF in data taking, but not successful
  • pCarbon 2hrs on Thursday 4/14 at the end of a store, TBC
  • Expect CeC requesting time intermittently
  • Two more vernier scans back to back this week
II. RHIC Schedule
  • Today
    • APEX 8:00 - 24:00, possibly till 19:00 only…
    • Physics for STAR if possible
    • No cosmic ray data because of beam at RHIC
  • Tomorrow 
    • Physics for STAR
    • pCarbon session at the end of a store
    • Vernier scans, two in one store
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Cosmic ray data during CeC
    • Daily tasks done
    • Geary and Norbert worked on ETOF
    •  
  •  Evening Shift
    • Data taking after CeC
    • Usual issues like RDOs, etc.
  •  Overnight Shift
    • Data taking quite smoothly
    • Power cycle sTGC
    • Few issues with ETOW, ESMD, experts were contacted still working, see email from Will
IV.  To Do:
  • Efforts to include ETOF in data taking shall continue
  • Tonko is recovering RDOs - done
  • EEMC problems - help from slow controls/network experts is needed


04/12/2022

 

I.   Summary of Operations:
  • Recovery from morning QLI took a long time, started taking data again at 18:30
  • Taking data smoothly after the recovery
  • Controlled access was used to power cycle the FST cooling crate, issue fixed.
  • Run list does not show in ShiftLog in the shift report for the last two shifts
  • APEX on Wednesday 4/13, 8:00- 24:00
  • Possible pCarbon 2hrs on Thursday 4/14 between stores
  • Expect CeC requesting time intermittently
  • Two more vernier scans back to back this week
II. RHIC Schedule
  • Today
    • Physics for STAR
  • Tomorrow (rest of the week, until further notice)
    • APEX 8:00 - 24:00
    • Physics for STAR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • QLI recovery till 18:30
    • Daily tasks done
    • Took cosmic ray data
    •  
  •  Evening Shift
    • QLI recovery till 18:30
    • Took cosmic ray data
    • Data taking continued quite smoothly, with few issues
  •  Overnight Shift
    • Data taking with few issues
    •  
IV.  To Do:
  • Efforts to include ETOF in data taking shall continue
  •  



 

04/11/2022 – Monday

 

I.   Summary of Operations:
  • Taking data with usual problems at STAR
  • STAR reached 90% of figure of merit goal
  • Yesterday's power dip caused RHIC and STAR magnet trips,
  • Two QLI (magnet quenches) at RHIC: 3:26 and 8:26 am
  • Data taking resumed at about 7 am after the first QLI
  • CeC access now behind recovery from QLI
  • Controlled access opportunity now till about 11 am
  • No CeC  on Sunday 
  • Number of RDOs needed to be masked out
  • David reduced trip limit in sTGC to 80 microamps
  • Kolja and shift leaders suggest reevaluating the need for the emc check before the first run in a new fill: MCR ramps from flattop after only a brief period (with negligible ZDC AND). By the time emc-check is possible, physics is already declared. Is the emc-check needed/helpful under these conditions? We don’t change anything
  • Two more vernier scans this week back to back 
II.  RHIC Schedule:
  • Today
    • CeC access during recovery
    • Physics for STAR
  • Tomorrow (rest of the week, until further notice)
    • Physics for STAR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Power dip caused STAR magnet trip
    • Quite smooth data taking with few minor issues
    • Tonko brought four RDOs back to life, iS5-1, iS9-3, iS10-2, iS10-3
    • Another trip in sTGC plane 1 cable 6.
    •  
  •  Evening Shift
    • Data taking continued quite smoothly, with few issues
  •  Overnight Shift
    • Data taking until magnet quench at RHIC at about 3:30 am
    • Taking cosmics after that
    • Data taking resumed at about 7 am
IV.  ToDo:
  • Efforts to include ETOF in data taking shall continue

 

04/10/2022 – Sunday

 

I.   Summary of Operations:
  • Taking data continuously with few bumps  
  • New note by Jeff and Hank on how to handle Trigger DAQ problems with dead times / crate failures DURING PHYSICS RUNS was distributed to the shift leaders and uploaded on STAR operations Drupal page 
  • CeC  4 hrs 
  • Attempt to put ETOF in cosmic ray run was not successful
  • Access at STAR by Tim to do cable swap, which was a success
  • This morning's TOF issue was very efficiently recognized by the shift crew. Rongrong identified the offending tray #32 in TOF, which was giving multiplicity 15 and above. The tray was masked out.
  • Intensity reduced by 5%
  • Number of RDO failures is reduced, most likely because of the above
II.  RHIC Schedule:
  • Today
    • Physics for STAR
    • CeC 16:00 - 20:00, flexible with the store end, dedicated?
  • Tomorrow (rest of the week, until further notice)
    • Physics for STAR
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Quite smooth data taking with few minor issues
    • Tonko brought four RDOs back to life, iS5-1, iS9-3, iS10-2, iS10-3
    • Another trip in sTGC plane 1 cable 6.
    •  
  •  Evening Shift
    • Data taking continued quite smoothly, with few issues
    • Beam lost at ~ 23:30
  •  Overnight Shift
    • Very smooth running all night.  No interruptions except for beta squeezes.
IV.  ToDo:
  • Check if CeC intends to use their four hours, if so make sure that it matches beam dump and fill
  • Efforts to include ETOF in data taking shall continue
  • David to reduce trip limit in sTGC to 80 microamps after this meeting


04/09/2022 – Saturday

 

I.   Summary of Operations:
  • Taking data continuously with few bumps  
  • New note by Jeff and Hank on how to handle Trigger DAQ problems with dead times / crate failures DURING PHYSICS RUNS was distributed to the shift leaders and uploaded on STAR operations Drupal page 
  • CeC on Friday was cancelled
  • Attempt to put ETOF in the run was not successful
  • Lost beam due to a problem with a Beam Loss Monitor
  • Sat, Sun CeC 16:00 - 20:00 possible dedicated, flexible depending on the store end, can be used for access at STAR
II.  RHIC Schedule:
  • Today
    • Physics for STAR
    • CeC 16:00 - 20:00, flexible with the store end, dedicated
  • Tomorrow (rest of the week, until further notice)
    • Physics for STAR
    • CeC 16:00 - 20:00, flexible with the store end, dedicated
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Quite smooth data taking with few issues
    • Two subsequent incidences of daqman CRITICAL scDeamon.C:#1364 EQ1[trg] [0x8019] died/rebooted -
      fixed with eq1 power cycle and Reboot Trigger.
    • Two subsequent incidences ETOW: Errors in Crate IDs: 2 
      Recovered with Will's instructions: powercycle VME-94, reconfigure crates 1 and 2
    • sTGC HV trip
    •  
  •  Evening Shift
    • Data taking continued quite smoothly, with few issues
    • Issue on startup: iTPC sector 20 was dead in DAQ and we could not get it back either by rebooting in DAQ or by powercycling the RDOs in the sector. It was resolved by powercycling the crate in the DAQ room
    • eTOF was put back in towards the start of the fill, but would give the error: "ETOF has 1096>100 EVB errors. Asserting BUSY. Reconfigure detector!” It was taken out of the running.
    • Couple of trips in sTGC; both were cleared by DO hitting "clear trips" and then "full" on sTGC slow controls. This is now the correct procedure and is supposed to be included in the official instructions soon.
    • New iTPC masks: Sector 9, RDO 3, Sector 10, RDO 2
  •  Overnight Shift
    • Mostly smooth data taking with few issues
    • Problem with ESMD Communications. ESMD 03U. Fixed by power cycling Crate 91 and rebooting 
      between runs.
    • Had to mask iTPC Sectors S5-1 and S10-3, all fixed
IV.  ToDo:
  • Access today during CeC?  - fix of the trigger problems Tim, Christian on standby, controlled access, call Chris Perkins
  • Check if CeC intends to use their four hours, if so make sure that it matches beam dump and fill
  • Efforts to include ETOF in data taking shall continue. 
  • Print new manual for sTGC procedure - Prashanth
  • JH to request C-AD to reduce intensity in order to minimize backgrounds

04/08/2022 - Friday


 

I.   Summary of Operations:
  • Taking data continuously with few bumps  
  • Problem with 100% dead during the day shift, which cascaded because of cycling power on trigger crates
  • See recommendations on how to handle it: a simple stop/start of a new run should be sufficient;
    call an expert before cycling power, which is the last resort
  • FCS EMcal problems diagnosed through online QA, two run delay in diagnosing. Fixed by Oleg by power cycling the HV 
  • Problem with sTGC: Plane 1 Cable 6 tripped. Why did it take 13 minutes? Continuing to run while we fix this HV problem.
    Fixed by Prashanth during the run by manually setting the Set Point to 2900 V
  • Fri, Sat, Sun CeC 16:00 - 20:00 dedicated, flexible depending on the store end, can be used for access at STAR
II.  RHIC Schedule:
  • Today
    • Physics for STAR
    • CeC 16:00 - 20:00, flexible with the store end, dedicated
  • Tomorrow (rest of the week, until further notice)
    • Physics for STAR
    • CeC 16:00 - 20:00, flexible with the store end, dedicated
 
III. Items from shifts, last 24 hrs:
  •  Day Shift
    • Smooth running until 100% busy cascaded into a bigger problem due to cycling power on trigger crates
    • TPX, iTPC 98% dead was a red herring. We power cycled FEEs before realizing BTOW was running hot and BHT3 was firing at 80 kHz and swamping everything else. Prepare for physics ended up being the resolution.
    •  
  •  Evening Shift
    • Data taking continued quite smoothly, with few issues
    • Lost sTGC LV IOC at one point; recovered while the TPC was recovering from an anode trip.
    • Dave was called about MTD HV crate, which shut down and recovered by itself 
    • Several runs ended early due to 100% dead trigger.
  •  Overnight Shift
    • Mostly smooth data taking - there were small issues, see shift report from Steve
    • FCS EMcal problems diagnosed through online QA, two run delay in diagnosing. Fixed by Oleg
    • Delayed beam dump because of injector problems
IV.  ToDo:
  • Kolja recommends power cycling stgc rob#2 (caution message it will stay, goes away with power cycling) after the second warning about it. 
  • David’s recommendation: DOs no longer have to manually turn off sTGC LV ROB5, sections 3, 5, and 7 on slow controls. 
  • New trigger 100% dead handling  procedure from Hank and Jeff
  • Access today during CeC  - fix of the trigger problems Tim, Christian on standby, controlled access, call Chris Perkins
  • No more low luminosity run
  • New trip limit for sTGC to be established
  • Shift leader’s computer needs to be upgraded after this run





04/07/2022 -Thursday

 

I.   Summary of Operations:
 
  • Big power dip, which caused about 5 hrs of downtime, 13:30 - 18:30
  • Polarization Blue 0.54, Yellow 0.47
  • One target in Blue lost and new one inserted, being conditioned
  • New mask rule in the STAR CR announced by JH and implemented
  • Shift leaders requested to make sure that ID badges are worn and those on shift have RHIC access cards
  • New L4 expert, update the expert list now Diyu Shen, see details below
 
II.  RHIC Schedule:
     - Today
       ~ Physics for STAR
     - Tomorrow (rest of the week, until further notice)
        ~ Physics for STAR 
 
III. Items from shifts, last 24 hrs:
     - Day Shift
       ~ Smooth running until 13:30 when power dip occurred
       ~ Recovery took a lot of effort, which continued into next shift
           
     - Evening Shift
       ~ Data taking started at 18:40, STAR was ready before RHIC resumed operations
       ~ Mostly a smooth shift after 18:40, except for several runs that ended early due to various issues noted in the log. 
       ~ Dave was called about MTD HV crate, which shut down and recovered by itself
 
     - Overnight Shift
       ~ Mostly smooth data taking - there were small issues, see shift report from Steve
       ~ Polarimetry B1U target - remains offline for next store. Lost 1.5 hours between stores
       ~ Tonko fixed then unmasked a missing iTPC sector iS11-1
       ~ Problem with FEE PROM in iTPC S22-1.  Masked this sector from r23097019
 
IV.  ToDo:
     ~ BBC crate control, experts (Dave) should follow up, GUI problem? Canbus? Resolved by reboot of the main Canbus
     ~ Expert should look into L4 performance from last night, calibration issue? The two servers not working were restored 
     ~ New L4 expert, update the expert list now Diyu Shen Email: dyshen@fudan.edu.cn Cell: 9 011 86-15702127397
    


04/06/2022 - Wednesday

 

I.   Summary of Operations:
 
Running smoothly yesterday (day) and after CeC
Polarization Blue 0.50, Yellow 0.52 at the beta squeeze (Vincent 0.54)
Trigger 100% dead few times
 
II.  RHIC Schedule:
     - Today
       ~ Physics for STAR
       ~ 2hrs spin orientation measurement at the end of current store (STAR in safe state)
     - Tomorrow (rest of the week?)
        ~ Physics for STAR, NO CeC
 
III. Items from shifts, last 24 hrs:
     - Day Shift
       ~ Smooth running
       ~ An hour lost with BBC and Run Control issues
           
     - Evening Shift
       ~ CeC started ~ 5:45 
       ~ Restricted access at STAR
       ~ Christian and Tim replaced the original NW THUB with new MCU
       ~ Still no communications to NW THUB and Canbus.   
       ~ Cosmics and daily pedestal runs taken
       
     - Overnight Shift
       ~ CeC ended about 1:30
       ~ Physics began around 02:00.
       ~ Several runs with Trigger 100% dead and spontaneous run stops.
       ~ Problem at 04:00 with reboot of L4: L4Cal and L4Disp do not reboot.
       ~ Removed them from node configuration. L4 plots seemed undisturbed, 
       ~ 05:20 Magnet trip.  
       ~ 06:50 Resumed physics running. Lost about 1.5 hours of data-taking
 
 
IV.  ToDo:
     ~ BBC crate control, experts (Dave) should follow up, GUI problem? Canbus?
     ~ Expert should look into L4 performance from last night, calibration issue? Jeff to discuss with the new expert on call
     ~ Update expert on call to be Jeff
     ~ GMT gas supply need to be checked - not an issue now
     ~ Experts to review L2 pedestal shifts - wrong file was uploaded, fixed now
     ~ New mask rule - one more dedicated email Cc'ing the incoming crew; post a sign (Prashanth); JH to email starops
     ~ Pink badges need not be worn, but shift crew must have them in case they need to access - shift leaders to make sure
     ~ Everyone must wear his/her ID badge

04/05/2022 - Tuesday

 

I.  Summary of Operations:

  • Reduced AGS extraction currents, as requested by STAR, going well
  • CeC studies continue but their schedule may change, be flexible

II. RHIC Schedule

  • Today
    • Physics for STAR                    (Note change of schedule, next line)
    • CeC studies           16:35 – 00:35    (no beam, access is possible)
  • Tomorrow (Wednesday)
    • Physics for STAR 
    • No scheduled access.   Jet needs work but risk analysis suggests don’t touch it.
    • p-Carbon spin direction measurements – 2 hours at end of a fill,  time TBD.
  • Future
    • Thursday & Friday  CeC (Au Beam, 48 hours)   May be reduced to 2 shifts of 8 hrs, TBD, be flexible.

III.  Items from Shifts:

  • Day Shift
    • Smooth running (except unexpected beam abort at 12:30)
    • FCS HCal, Trigger 100% dead, EVB-08, Run Control, Tonko@work - fixed
    • Multiple sTGC problems. Now missing two sectors of plane 3
  • Evening Shift
    • Smooth running till ~6 PM. Beam dumped for CeC until 2:00 AM
    • STAR Access
      • TOF maintenance: replaced THUB-NW, CANBUS not working, replace microcontroller next access (today)
      • Prashanth & David checked sTGC reverse burn, no luck.  Stopped.
      • Akio loaded new HV file for FCS
    • Cosmics after 10 PM
  • Overnight Shift
    • New shift crew, overlap day
    • Cosmics till 2:00 AM, Smooth production running after 02:00
    • ETOW, JEVP restart, Tonko@work
    • MTD and TOF, and sTGC are not happy

IV. To Do:

  • Access on Tuesday afternoon (today); report to SL before entry
    • Access to start at 16:35 (Note change of schedule)
  • No maintenance access on Wednesday. Develop plan to accommodate this
  • Shift CeC studies to 4:35 PM, JH to negotiate, notify Ops list – Done
  • Quick entry to unplug TOF cable (?)… recommend wait till CeC access
  • TOF gas, Alexei and Geary to discuss
  • Shift Leaders, please call Prashanth if sTGC trips
  • Change sTGC HV and trip current … can be done between runs

     

04/04/2022 - Monday

I.  Summary of Operations:

  • RHIC performance ‘pretty good’ over the weekend, some AGS injection issues
  • AGS injection current lowered in response to lower effective cross-section used by STAR, generally good results
  • Jet polarization avg of 54% - not in full agreement with AGS numbers, experts reviewing the numbers
  • STAR took 18.5 hours of data on Saturday.  A new record for Run 22!  Sunday took 15 hours.
  • CAD plans to work on AGS issues behind CeC activity

II. RHIC Schedule

  • Today
    • Physics for STAR
    • CeC studies            16:00 – 00:00   (no beam, access is possible)
  • Tomorrow
    • Physics for STAR 
    • CeC studies            16:00 – 00:00   (no beam, access is possible)

III.  Items from Shifts:

  • Day Shift
    • Smooth running
    • Trigger 100% dead, VME crate 63 (MXQ), Tonko@work, sTGC, FCS, EQ2
    • Removed a few FCS triggers at 10:30 till end of fill
    • Access ~15:30, replaced LV power for splitter, TPC field cage currents - fixed
  • Evening Shift
    • Smooth running
    • David T. tuned a few issues around the control room
    • VME crate 75 (with scaler board) fan failure at ~20:45, Christian access – fixed
    • L2 pedestals shifted at 20:00, expert analysis needed
  • Overnight Shift
    • Smooth running
    • iTPC, TPX, Trigger 100% dead, Run Control, restart JEVP online plots

IV. To Do:

  • Access possible Monday afternoon (today); report to SL before entry
    • TOF maintenance: Replace THUB-NW.  Magnet barrier to come down.
    • Ramp TOF LV down. Tune TOF gas. Estimate two hours to replace THUB. 
    • Prashanth to cycle sTGC – check reverse burn, check HV cables
  • Another access possible on Tuesday afternoon.
    • Work with RHIC to give ~2 hours of overlap on Tues. afternoon shift
  • Likely reduced access to STAR on Wednesday … develop maintenance plans accordingly
  • Experts to review L2 pedestal shifts

     

04/03/2022 - Sunday

I.  Summary of Operations:

  • Access today (Sunday) at 3:00 PM
  • CeC on Monday has been reduced to 8 hours, schedule TBD
  • Polarization mixed;  Hi 40’s – Lo 50’s depending on when you look

II. Yesterday's News

  • Lower effective pp cross-section (as seen by ZDCs) has been adopted across all platforms  (2.06 => 1.86)
  • As a result, the Figure of Merit plots will look 10% higher than previous editions of these plots
  • ZDC rates from MCR now to be 10% lower  ( 420 ± 100ish Avg  =>  380 ± 100ish Avg )
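
The 10% figures above follow directly from the cross-section rescaling; a quick arithmetic check, as a sketch using only the numbers quoted in the bullets (units as quoted):

```python
# Effective pp cross-section change adopted across all platforms (2.06 => 1.86).
old_sigma = 2.06  # previous effective cross-section seen by the ZDCs
new_sigma = 1.86  # newly adopted value

ratio = new_sigma / old_sigma                # ~0.90
rate_drop_pct = (1.0 - ratio) * 100.0        # ZDC-derived rates read ~10% lower
fom_rise_pct = (1.0 / ratio - 1.0) * 100.0   # luminosity/FoM inferred from the
                                             # same rates reads ~10% higher

print(f"rates ~{rate_drop_pct:.0f}% lower, FoM ~{fom_rise_pct:.0f}% higher")
```

This is consistent with the quoted average rate change (380/420 is also roughly 0.90).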

III. RHIC Schedule

  • Today
    • Access at 3:00 PM to fix FCS
    • Physics for STAR
  • Tomorrow
    • Monday April 4th       CeC for 8 hours    (no beams, access is possible)
    • Physics for STAR

IV.  Items from Shifts:

  • Day Shift
    • Smooth running
    • TPX, sTGC, Run Control Froze, TOF gas, FCS, Tonko@Work - fixed
  • Evening Shift
    • Smooth running
    • Online plots died, JPlots restarted
    • TOF LV and TPC FC IOCs, David@Work, BBC & BBQ – fixed
    • FCS needs work (lost Drell Yan and J/Psi triggers as a result)
  • Overnight Shift
    • Smooth running
    • FST, sTGC Plane 4 Prashanth@work, TOF – fixed
    • EMC check at beginning of fill – no FCS events (?)
    •     [This is a configuration error, take FCS out or fix configuration; Tonko and Jeff to talk and determine best shift leader actions.]
    • Event building 3 left in a few runs early in the shift (?)
    •     [If it runs, it's OK.  It ran, so it seems it was OK.]

V. To Do:

  • Access at 3:00 PM today to fix FCS;  fix Field Cage Currents
    • FCS pre-shower board needs work during the Sunday access
    • Shift Leader Contact  Oleg – Alexei – David – Tim  & Prashanth for Sunday access
    • Shift Leaders - take Drell Yan and J/Psi triggers out of the run until FCS is fixed
  • Another access is possible on Monday
    • TOF maintenance: repair or replace THUB-NW.  Magnet barrier needs to come down.
    • Prashanth to cycle sTGC on Sunday and/or Monday
    • Please report to the shift leader before entering the STAR hall.

       

04/02/2022 - Saturday

I.  Summary of Operations:

  • CeC on Monday has been reduced from 48 hours to 8 hours    (no beams on Monday, access is possible)
  • More CeC in our future, TBD
  • Polarization is good, overnight Hi 40’s, today Lo 50’s

II. Other News

  • Lower pp cross-section has been adopted across all platforms   (2.06 => 1.86)
  • Figure of Merit plots will look 10% higher than previous editions of these plots
  • ZDC rates from MCR now to be 10% lower  ( 420 ± 100ish Avg  =>  380 ± 100ish Avg )

III. RHIC Schedule

  • Today
    • Physics for STAR
  • Tomorrow
    • Physics for STAR            
  • Future
    • Monday April 4th       CeC for 8 hours    (schedule TBD, no beams, access is possible)

IV.  Items from Shifts:

  • Day Shift
    • Smooth running till Noon, CeC till 8 PM, some cosmics taken
    • Polarization high 40’s low 50’s
    • MTD, sTGC & gas, Tim&Christian@Work on THUB-complete
    • TOF out most of the day
    • Shift log stopped recording, disks full, Jeff@work - repaired
  • Evening Shift
    • Smooth running after 20:00
    • Polarization above 50%
    • iTPC RDOs, TOF trays, THUB masked/ quarter of detector out
    • TPC laser data taken with and without compensating resistor  (for TPC Calibrations in coordination with GVB)
  • Overnight Shift
    • Smooth running
    • Polarization Hi 40’s – Lo 50’s, mid 50’s most recent fill
    • sTGC, BTOW, DAQ watch program, Tonko@work

V. To Do:

  • TOF maintenance: repair or replace THUB-NW.  The Magnet barrier will need to come down.

     

04/01/2022 - Friday

I.  Summary of Operations:

  • Many strange phenomena at RHIC yesterday, odd day.
  • Ramp 33262 stopped during the ramp; many unusual issues; Vtx wide  (JH says good enough)
  • OPPIS work overnight & behind CeC – successful after a few false starts
  • Polarization in RHIC seems good, not sure if AGS numbers agree

II. RHIC Schedule

  • Today
    • Physics for STAR              00:00 – 12:00
    • CeC no beam                  12:00 – 20:00      (access to STAR possible)
    • Physics for STAR              after   ~21:00
  • Tomorrow
    • Physics for STAR            
  • Future
    • Monday April 4th             CeC for 48 hours   (CeC with Au beams, no access to STAR)

III.  Items from Shifts:

  • Day Shift
    • Beam dumped at ~08:30 for CeC
    • TOF, sTGC, Tonko@work, Christian-Prashanth-Oleg@work
    • CeC done at ~16:00 but start of odd problems in AGS & RHIC
  • Evening Shift
    • Vacuum Valve failure and other unusual issues @ RHIC
    • RHIC ready ~22:00 …Ramp delayed…steering for beam…long wait
    • Minor TOF LV issues in last run of the evening
    • Polarization numbers in the low-mid 50’s
  • Overnight Shift
    • Smooth running
    • TOF THUB, other TOF issues, Rongrong@work, TOF out of run
    • TPX, sTGC, EEMC, Polarimetry page & RunLog not updating
    • Reminder … Dump at Noon  (4/1/2022)

IV. To Do:

  • Call Jeff if Run Logs aren’t updating … even late at night
  • Additional details entered into the Shift Log would be useful as we come to the end of the run
  • Update shift procedures, TPC Short Ops manual w/GG & Laser coming soon
  • sTGC statistics for summer shutdown maintenance
  • reflash TOF THUB micro-controller, suggest mask off RDO-1 and run TOF today
  • ZDC cross-section discussion – JH to consider further.  Updates to CAD, Jamie & Vincent.

03/31/2022 - Thursday

I.  Summary of Operations:

  • OPPIS source work overnight & behind CeC
  • Beam dumped at ~08:30 for CeC
  • Strong geomagnetic storm today; may affect electric grid.

II. RHIC Schedule

  • Today
    • Physics for STAR              00:00 – 08:00
    • CeC no beam                  08:00 – 16:00      (access to STAR possible)
    • Physics for STAR              after   ~17:00
  • Tomorrow
    • Physics for STAR              00:00 –  08:00
    • CeC no RHIC beams        08:00 – 16:00      (access to STAR possible)
    • Physics for STAR              after   ~17:00
  • Future
    • Monday April 4th             CeC for 48 hours   (note impact on training shift crews)

III.  Items from Shifts:

  • Day Shift
    • Beam dumped for APEX (w/protons) early in shift
    • sTGC, access to online, Magnet trip, MXQ[trg] – fixed
    • Alexei and David accessed the hall, power cycled TPC Field Cage Current monitor
  • Evening Shift
    • APEX till ~22:45
    • iTPC & minor Trigger issue - fixed
    • new sTGC reference plot available
  • Overnight Shift
    • Smooth running
    • Polarization > 50%, only Blue shows up on web page
    • BTOW, Jevp plots, EMC, EEMC, FCS, sTGC, Laser GUI - fixed
    • Trigger 100% dead

IV. To Do:

  • Christian & team to make access
  • Additional detail in the eLog would be useful as we come to the end of the run
  • As always, try to maintain high shift efficiency – especially at the start of a fill
  • ZDC cross-section discussion – JH to consider further

03/30/2022 - Wednesday

I.  Summary of Operations:

  • Short stores, Blue lifetime low, AGS emittance high
  • Polarization still a bit low (high 40s), injectors?
  • Yellow abort kicker access & maintenance
  • Strong geomagnetic storm on Thursday may affect electric grid.  (minor W & F)

II. RHIC Schedule

  • Today
    • Physics for STAR              00:00 – 08:00
    • APEX studies w/proton   08:00 – 23:00
    • Physics for STAR              after   ~23:00
  • Tomorrow
    • Physics for STAR              00:00 –  08:00
    • CeC no RHIC beams        08:00 – 16:00      (access to STAR possible)
    • Physics for STAR              after   ~17:00
  • Future
    • Friday – CeC no beam     08:00-16:00         (access to STAR possible)
    • Monday April 4th             CeC for 48 hours   (note impact on training shift crews)

III.  Items from Shifts:

  • Day Shift
    • Smooth running till end-of-shift (Pol high 40’s)
    • FCS, TPX, BTOW, iTPC, TOF – ordinary issues
    • Trigger 100% dead
  • Evening Shift
    • Yellow abort kicker maintenance – no beam till ~20:00
    • TOF LV – disable and turn off tray 2, masked off
    • Relatively smooth running  (Pol high 40’s)
    • iTPC masked off  9-1, 16-3; other minor issues – fixed
    • Question – should we turn off FEE after masking off RDO’s?
  • Overnight Shift
    • Smooth running
    • Polarimetry page not updating
    • sTGC, FST, BTOW, TOF, TPC Field Cage lost comms, Tonko@work
    • Trigger 100% dead

IV. To Do:

  • Shift crews please add more detail regarding shift to eLog,  also record significant comms with MCR
  • Access required to fix TPC field cage current logger
  • Do not turn off FEE unless specifically asked to turn it off;  update to shift crew instructions needed
  • Take IV scan on day shift – Akio to call STAR Control Room

03/29/2022 - Tuesday

I.  Summary of Operations:

  • Yellow abort kicker, sextupole PS failure
  • Polarization dropping overnight (50's to 40's)

II. RHIC Schedule

  • Today
    • Physics for STAR
  • Tomorrow
    • Physics for STAR   00:00 - 08:00
    • APEX studies        08:00 - 23:00
    • Physics for STAR   after 23:00
  • Future
    • Thursday - CeC    08:00-16:00  (no RHIC beams)
    • Friday      - CeC    08:00-16:00  (no RHIC beams)
    • Monday April 4th – 48 hours for CeC (note impact on training shift crews)

III.  Items from Shifts:

  • Day Shift
    • CeC Development & end effects   08:00-17:30
    • Cosmics most of the day, eTOF work, FCS, Tonko@work, HLT code rolled back
    • Access by experts: Laser, sTGC HV reverse burn continues
  • Evening Shift
    • Smooth running after 17:30
    • TPX, BTOW, VME CPU Creighton5 (takes 3 minutes to boot, wait till complete)
    • Yellow abort kicker failure ~22:00, Sextupole PS failure same time, MCR access required
  • Overnight Shift
    • Cosmics till ~ 03:00, then regular running
    • TOF lv/fixed, BEMC hv/fixed, Jeff magic@midnight, Tonko@work
    • Trigger 100% dead

IV. To Do:

  • Can MCR work on pre-injectors during CeC?

03/28/2022 - Monday

I.  Summary of Operations:

  • Today: Booster access 07:00-Noon; CeC development 08:00-16:00 work on injectors
  • Booster cavity E6 repaired => 3 fills with low Pol, poor emittance, transverse position error, etc.

II. RHIC Schedule

  • Today
    • CeC development 08:00-16:00  (Magnets at flattop, no ion beam)
    • Restricted Access to STAR Hall possible until 15:30 (no access to tunnels)
    • STAR will take cosmics during CeC work
    • Physics for STAR evening and overnight
  • Tomorrow
    • Physics for STAR
  • Future
    • 16 hours of APEX on Wednesday (no access at STAR)

III.  Items from Shifts:

  • Day Shift
    • Smooth running, Pol low ~45%, beam position not ideal
    • Trigger 100% dead
    • JEVP server restart, TOF tray, power cycled scaler board 6 - fixed
  • Evening Shift
    • Smooth running, Pol ~50%
    • sTGC FEEs power cycled, EEMC crate power cycled
    • L4 calib table errors, L4 rebooted ~22:00, VTX distributions improved, diagnostics improved
  • Overnight Shift
    • Smooth running, Pol ~52%, beam position good
    • L4 error, TPX issues, FCS errors, RHIC monitors frozen/fixed, Tonko@work
    • sTGC chamber tripped, additional work today
    • ZDC rates ~550kHz new fill, L4 Vz histograms look good

IV. To Do:

  • Alexei to make access, Prashanth to make access, Reverse polarity burn – update?
  • Bill to walk through at Noon for property inventory
  • Akio to load new voltage tables (not an access)
  • RMS distributions on L4 undergoing review by experts

03/27/2022 - Sunday

I.  Summary of Operations:

  • Booster rf problem, no estimate, trouble shooting continues
  • Deer Management activities Evening and Overnight – call MCR if you have questions

II. RHIC Schedule

  • Today - Sunday
    • Physics for STAR
  • Tomorrow - Monday
    • CeC Development 08:00-16:00 (TBD)
    • Physics for STAR
  • Future
    • TBD on Monday Morning

III.  Items from Shifts:

  • Day Shift- Saturday
    • Booster RF problem.  Held midnight fill till ~14:30, new fill ~16:30
    • Trigger 100% dead, screenshots saved
    • BBC crashed, lost comms to VME crates, SC3 crashed, etc.
    • Many good runs
  • Evening Shift- Saturday
    • Trigger 100% dead
    • EEMC Radstone board problem/fixed
    • Smooth running (but beam quality not so good)
  • Overnight Shift - Sunday
    • Trigger 100% dead, screen shots saved
    • Smooth running, but beam quality not ideal overnight, also current fill not ideal
    • Etow, sTGC, TRG, comms issues with MCR etc.
    • Discussion with MCR about when to dump beam due to issues at RHIC.  Held beam till 06:30, new fill ~07:30, JH suggests dump < 350 kHz

IV. To Do:

  • Consider low luminosity fill if low quality beam continues - beam shifted low - poor polarization
  • Scaler board 6 network issues - (last run day 83 run 61).  This impacts Lumi counting but board 5 is backup and should allow analysis to proceed.

03/26/2022 - Saturday

I.  Summary of Operations:

  • Deer Management activities Evening and Overnight – call MCR if you have questions
  • There still be daemons in RHIC ...

II. RHIC Schedule

  • Today - Saturday
    • Physics for STAR
  • Tomorrow - Sunday
    • Physics for STAR
  • Future
    • Monday: CeC development 08:00-16:00 (to be confirmed)
    • Deer Management activities Evening and Overnight till Monday morning

III.  Items from Shifts:

  • Day Shift- Friday
    • No beam all day
    • Tim worked on sTGC Blower.  Better?  (yes, so far)
    • Christian tuned RDO power, Tonko burned Proms, etc.
  • Evening Shift- Friday
    • Beams in RHIC after 10:00 PM
    • Smooth running
  • Overnight Shift - Saturday
    • Smooth running from RHIC
    • TPX & BTOW & TOF issues, EMC manual needs update re: alias for GUI
    • RF injector problem - not yet solved, no AGS beam, keep current beam till ~Noon

IV. To Do:

  • Summer activity: check sTGC blower

03/25/2022 - Friday

I.  Summary of Operations:

  • Deer Management activities Evening and Overnight – call MCR if you have questions
  • CAD worked on Yellow abort kicker
  • AGS calibrate pulse was found to be incorrect … explains (false) extraction losses
  • Polarization has slipped slightly in both rings (< 55%), work on injectors proposed
  • Last 3 fills for STAR were above Lumi target, but keep at this rate, tune later if necessary

II. RHIC Schedule

  • Today - Friday
    • p-Carbon spin studies 09:00-14:00 during Day shift (no beam for STAR)
    • Physics for STAR remainder of evening and night
  • Tomorrow - Saturday
    • Physics for STAR
  • Future
    • Monday: CeC development 08:00-16:00 (to be confirmed)
    • Deer Management activities Evening and Overnight till Monday morning

III.  Items from Shifts:

  • Day Shift- Thursday
    • Smooth running
    • TOF gas changed, TOF QA Plots funny/fixed, power cycle sTGC, etc.
    • Trigger 100% dead, screen shots taken and put in eLog
  • Evening Shift- Thursday
    • Smooth running
    • Trigger 100% dead, Hank took some screen shots
    • L2 died, TOF, EPD, BTOW online plots not working, L2 was out of run - fixed
  • Overnight Shift - Friday
    • Smooth running but ended with a double quench in yellow at 6:20 AM
    • sTGC minor issues/fixed, BEMC confusion/fixed, Tonko@work on RDOs, etc.
    • Trigger 100% dead, mystery continues, screen shots taken and put in eLog

IV. To Do:

  • Experts, please double check health of systems in preparation for a good weekend
  • Oleg to power up FCS radiation monitors

03/24/2022 - Thursday

I.  Summary of Operations:

  • CANCELLED:  CeC e-beam development (no RHIC beam)  08:00 – 16:00
  • Otherwise - no news is good news

II. RHIC Schedule

  • Today
    • Physics for STAR
  • Tomorrow
    • Friday: p-Carbon spin studies 09:00-13:00 during Day shift (no beam for STAR)
    • Physics for STAR remainder of Evening and Night
  • Future
    • Monday: CeC development 08:00-16:00 (to be confirmed)

III.  Items from Shifts:

  • Day Shift
    • STAR cooling tower was cleaned and put back into operations
    • Long access – many experts at work (e.g. TOF new HV board, sTGC blower bad sensor)
    • TOF HV glitch, repaired.  Some issues with default config needing update by DO
    • sTGC communications issues … alarm handler will have to be rebooted to fix the problem.  Shift crew can do this.
  • Evening Shift
    • Late start ... but no major issues
    • Will Jacobs worked on ETOW hot tower issue; crate masked but still shows as hot channel in QA plots
    • Smooth running after 10:45 PM, minor issues. (e.g.  Frozen online histograms fixed)
  • Overnight Shift
    • Smooth running most of shift
    • Shift Leader computer was erratic, fixed itself.
    • Lost communications with alarm handler, fixed itself.
    • BTOW PMT 10 empty bins, required cycling HV.
    • FCS issues, power cycle MPOD crate by expert
    • MTD had multiple errors on several occasions (54)
    • Trigger 100% dead a few times

IV. To Do:

  • sTGC blower sensor – action plan?
  • Work will continue on Trigger 100% dead issue; suggest reboot trigger whenever it happens
  • Continue with negative voltage burn-in

03/23/2022 - Wednesday

I.  Summary of Operations:

  • Relatively smooth data taking since yesterday; last 5 fills have lasted natural lifetime
  • Siemens performance has been good for the past 2 weeks, so confidence is growing that it is OK
  • Luminosity a bit low, RHIC Ops hopes this will improve after today’s access (e.g. they will work on chromaticity tuning)

II. RHIC Schedule

  • Today
    • AGS Access started at 07:00, RHIC Beam dumped 09:00 (on schedule)
    • AGS Access scheduled to complete by 14:00 so RHIC Beams by 15:00
    • This may slip, but hopefully no later than 17:00 for RHIC beams
    • Physics for STAR remainder of Evening and Night
  • Tomorrow
    • CANCELLED: CeC e-beam development (no RHIC beam)  08:00 – 16:00
    • Physics for STAR remainder of Evening and Night
  • Future
    • Friday: p-Carbon spin studies 09:00-13:00 during Day shift (no beam for STAR)
    • Monday: CeC development 08:00-16:00 (To be confirmed)

III.  Items from Shifts:

  • Day Shift
    • BEMC glitches
    • High bin at 97 in cr2Hot QA Plot for EEMC
    • STGC air blower temp not stable – sets off annoying alarm. (SlowC or STGC?)
    • Scaler rates for J/Psi were high in the middle of the day (not a continuing problem)
  • Evening Shift
    • Smooth running
  • Overnight Shift
    • High bin at 97 in cr2Hot QA Plot for EEMC – hot channel? (Will@work)
    • Long list of iTPC & TPX RDOs masked off  (~dozen),  could not unmask.
      (consultation with Tonko suggested before unmasking RDOs)
    • iTPC[9] prevented Run Stop.  Powered Down node in DAQ room & restart.
      (in general, contact Tonko or Jeff before power cycle nodes).
    • STGC air blower temp still unstable

Period coordinator change: Sooraj Radhakrishnan --> Jim Thomas

03/22/2022

I. Summary of operations:

  • Good stable fills all through yesterday
  • Vernier scans taken ~4.00 pm
  • Updates on Trigger 100% dead issue ---- Experts are looking into it; in some cases the issue is from lost tokens, but not from busy detectors
  • Maintenance day tomorrow. STAR Magnet off ~10.00 am
  • Please inform on tasks/to do/plans. So far:
    • Cleaning of magnet cooling tower
    • Time for Tonko to work on TPC
    • Access for Prashanth+David (stgc cable reconnection)
    • Access for Xu (~hr) coolant for FST
    • Access for Alexei for TPC survey
    • Access for Oleg/Akio to install radiation sensors in front of FCS
    • TOF Rack 1C1 needs to be checked that its cooling fans are working correctly (Alexei)

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • 07.00 - 17.00 :: Maintenance (RHIC from 9:00, Access at AGS behind store from 07.00), IP2 close at 14:00

III. Items from shifts:

  • Evening shift:
    • Masked out iTPC Sector 5 RDO 1
    • New voltage files are loaded for both FCS Ecal and Hcal. Rates looks good. Production Ids for all FCS production triggers were incremented AFTER this run
    • For the first time, TOF noise seems to have shown up in production running. Also seems to be on BTOW TP in the same region (fuzzy TP pedestals and enhanced response around TP = 100) ---- origin of the issue not clear. Jeff will look for large events in cosmic runs; if found, they could be reconstructed and looked into further
  • Overnight shift:
    • Issue with FST and ETOW. FST Critical Recovery failed for RDO(s): 4. EEMC lost communication
    • DOs turned the FST off and back on that fixed FST, back in next run. ETOW out for a few runs. DOs recovered ETOW following manual. Details on shift log
    • Jpsi*HTTP Sca Hz and upcJpsiSN Sca Hz were a little to moderately high, noted from Run 23081008 ---- Do 'Prepare for Physics' on the BEMC computer if this happens
    • STGC computer in control room froze completely. Switched to a different tty (using Ctrl+Alt+F3) and came back to the original tty (using Ctrl+Alt+F1) to fix ---- David will follow up
    • Because the STGC wasn't in a safe state before the beam dump, missed the opportunity to take the zdcPolarimetry_2022 run
    • Gas alarm for STGC for PT-1. It is rising and is now at 24
    • TOF LV East Sec 5 U5 temperature alarm of 42 (orange)

03/21/2022

I. Summary of operations:

  • Unexpected beam loss 3 times; some downtime because of this. Apparently still Yellow abort kicker issues
  • Trigger going 100% dead a few times and needing to restart runs ---- Trigger experts are looking into the issue
  • 8 hr maintenance day on Wednesday. Please inform on plans/tasks to do/requirements

II. RHIC Schedule:

  • Today:
    • STAR Physics. Access for CeC taken, back to STAR
    • Vernier scans possibly today
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • FCS "FEE count 44, expected 55" -- restart run, for two runs in a row. Contacted Akio
    • Shift crew pointed out extra spikes in FST Non-ZS noise vs channel for disk
    • Node 0xf001 is forcing the run to stop because: 494|rc_handler.c|L2 stopping run. Looks like BCW was the problem. Rebooted trigger after the run stopped and started a new run
  • Evening shift:
    • Under ETOW cr2Hot shows hot tower at ~97. Earlier this week (day 77) this hot tower was gone
    • TPX and iTPC regularly go 100% dead - probably 5-10x a run. Auto-recovery and auto power-cycling work 99.99% of the time, but it still seems like a lot of auto-recoveries (?) ---- Tonko informs that the occurrence is in fact less frequent over the past 5 days
  • Overnight shift:
    • New spike in ETOW->Hot_Tower cr2, right next to the other spike
    • sTGC PT-1 pressure just went above the minor threshold. From the online plot it looks like a slow but steady upward trend over the last day or two
    • TOF HV was ramped to standby after an unexpected beam abort, but the DO noticed the voltages stayed at full values (7k). Online status viewer confirms TOF HV still at full. DOs working through the TOF HV IOC restart procedure ---- Call experts if this happens

03/20/2022

I. Summary of operations:

  • Got beams for STAR all yesterday. A few unexpected beam losses. Might be an access to work on some problems related to it at end of this fill
  • No vernier scans taken
  • Some recurring issues with ETOW during the shifts
  • Trigger goes 100% dead several times. Dylan has put a Python script on the SL computer to watch the DAQ Monitor webpage and play a sound if the trigger is dead
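
A minimal sketch of such a watcher (the DAQ Monitor URL, the page text matched, and the terminal-bell alert are all assumptions; the actual script on the SL computer may differ):

```python
import time
import urllib.request

DAQ_MONITOR_URL = "http://daq-monitor.example/status"  # hypothetical URL

def is_trigger_dead(page_text: str) -> bool:
    """Return True if the monitor page reports the trigger 100% dead."""
    return "100% dead" in page_text.lower()

def check_once(url: str = DAQ_MONITOR_URL) -> bool:
    """Fetch the monitor page once; sound the terminal bell if dead."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return False  # page unreachable; just retry on the next poll
    if is_trigger_dead(page):
        print("\a*** Trigger 100% dead -- restart the run ***")  # \a rings the bell
        return True
    return False

def monitor(period_s: float = 30.0) -> None:
    """Poll the DAQ Monitor page forever, once per period_s seconds."""
    while True:
        check_once()
        time.sleep(period_s)
```

Calling monitor() from a terminal on the shift leader computer would then alert the crew without anyone staring at the webpage.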

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • EEMC crate issues, don't know the cause yet. Happened a few times; see below for other instances in all the shifts. Taken out of run and power cycled the relevant crates and VME crates to fix
    • Caution message: "ETOW: failed crate Ids: 4". QA plot ETOW Hot_Tower cr4Hot mostly empty. EEMC GUI Tower FEE Crate 4 showed "Ready" light red. Most FEE crate voltages were in red. Monitoring plots related to that crate were nearly empty. After consulting the expert, power cycled the corresponding CANBUS crate, which restored the FEE crate voltages. After a master reload everything seems back to normal
    • Run would not start due to TPX 23-4 and 23-5 failed configuration. Masked out TPX 23-5
    • New sTGC caution messages:"2: FOB 0 (R#10:F#55) NOT found" and "2: FOB 5 (R#10:F#60) NOT found" (since changes David made yesterday, we always get "1: FOB 5 (R#05:F#30) NOT found", but these two are new). QA plots look fine ---- Prashanth will look into and ask Tonko to suppress these messages
    • FCS[1] stuck configuring and run could not start, power cycled fees twice to fix
  • Evening shift:
    • ETOW: configuration failed -- watch ETOW triggers or restart run. Crates 3 and 4 were off with no tower response. Tried power cycling the crates and following the reconfigure instructions, but that didn't work. Cycled Crate 90 and started over, which worked
    • EEMC Crates 1&2 and MAPMT sectors 12-3 are bad. Monitoring plots show missing data. The problem with EEMC in the last run was that somehow HV was not applied to the cycled crates, even though they were all green and looked OK. Will fixed it but noticed an error with the Crate 3 configuration
    • Monitoring plots interface not updating, restarted servers
  • Overnight shift:
    • ETOW Hot_Tower spike in cr2 appears to be back, also new spike in cr4Critical_Shift_Plots->ETOW->Tower_ADC for cr2. We power cycled tower FEE cr1&cr2. Problem is resolved
    • Many fcs trigger rates are high (red). Stopped run and rebooted fcs from run control. Fixed
    • prom_check: trg + daq + itpc: “Incorrect Trigger clock 10.000 MHz -- restart run” ---- not important, can be ignored. Also, prom_check can be taken with beam

03/19/2022

I. Summary of operations:

  • Back from APEX around 12.00, but then an unexpected beam loss. Long downtime; beam back around 12.30 am. Took cosmic runs during the downtime
  • No vernier scan last night, might do later today

II. RHIC Schedule:

  • Today:
    • STAR Physics, Vernier scan
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • West laser failed. Short access for Alexei to fix
    • Prashanth increased the PT-1 pressure as we were getting low alarms
    • David took out sTGC ROB #5 channels 3, 5, and 7. These should be manually removed when ramping up LV
    • Updated run control handler for support for new prom_burn and fcs_ivscan runs (Jeff)
  • Evening shift:
    • Cosmic run: Noise in TOF and BEMC JP is back
  • Overnight shift:
    • ETOW Hot_Tower spike in cr2 appears to be back, also new spike in cr4

03/18/2022
I. Summary of operations:

  • Got one fill overnight. Unexpected beam loss just after fill after that. Yellow abort kicker issues during injection. MCR decided to go into pC spin direction measurements early, ~6.00 am
  • Shift was smooth, no major issues

II. RHIC Schedule:

  • Today:
    • 08.00 - 12.00 :: pC spin direction measurements. STAR Physics otherwise
    • Vernier scan during store after pC spin measurements
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Evening shift:
    • FST error code during pedestal. Disk1 module 12 alternate sensor current was fluctuating called Xu and he fixed. He will raise limit on this module so it doesn't happen again
    • Masked out iTPC S13:2
    • iTPC and TPX 100% dead during start of run, power cycled FEEs to fix ---- likely from masked-out RDO S13:2 in error state; power cycling FEEs was the right way to fix it
  • Overnight shift:
    • Unmasked itpc S13-2 after instructions from Tonko

03/17/2022
I. Summary of operations:

  • Dedicated time for APEX and CeC. Not much activity at STAR. Took btow_ht: trg + daq + btow every two hours

II. RHIC Schedule:

  • Today:
    • CeC till 16.00, STAR Physics afterwards
  • Tomorrow:
    • 08.00 - 12.00 :: pC spin direction measurements. STAR Physics otherwise

III. Items from shifts:

  • Day shift:
    • TOF HV was turned to standby, but voltages were not ramping down. DOs went through the HVIOC restart procedure. Connection appears to be back, but the GUI still does not seem to be responding (voltages will not ramp up when HV is turned on). David power cycled the TOF HV crate, which fixed the issue
    • Unmasked iTPC S13-2 and S20-4 following Tonko's instructions
    • Some delay in ramping up the magnet, was unable to turn on the main power supply

03/16/2022
I. Summary of operations:

  • Access till ~12.00 to fix sextupole issues. Beam back around 1.00 pm
  • Power dip and magnet trip at STAR ~4.00 pm. Lost connection with trigger detectors, TPC FEE GUI went completely white. Fixed by David. Run could not be stopped in run control. Needed to turn power back on to MIX, BE, EQ2, and LOL2 VMEs following instructions from Jeff. STAR back ~5.00 pm. Got next fill at 1.00 am

II. RHIC Schedule:

  • Today:
    • APEX till 16.00 (STAR Magnet is off, Power supply repair behind APEX); CeC afterwards
  • Tomorrow:
    • CeC till 16.00, STAR Physics afterwards

III. Items from shifts:

  • Day shift:
    • Unmasked iTPC S2-3, 10-4, and 6-4 at Tonko's request
    • Updated fcs offline trigger ids to match fast offlines organization scheme (Jeff)
    • In shift critical plots, observe peaks in TOF ADC plots that were not present in previous cosmic data. Experts notified, but they don’t know the reason
    • Spike in BEMC DSM L0 Input - PatchSum plot. Oleg asked us to take a btow_ht run to debug, all looks ok
    • Online shift plots not updating so restarted JEVP server
  • Evening shift:
    • pedestal_rhicclock_clean: trg + daq + fcs runs correctly, but in the DAQ monitor BCE gives errors: CRC Error on STP Build header. Event 1000, Token 1000, nPre 0, nPost 0
    • Cosmic runs: The unusual activity is back in the TOF and BEMC
  • Overnight shift:
    • TOF THUBNW now staying at yellow alarm with terminal voltage 5.9, should be fine

03/15/2022
I. Summary of operations:

  • Much smoother fills and shifts than yesterday
  • Faster loss of intensity in the last fill, yellow beam has issues. Yellow sextupole issue
  • Possible access for 2 hours from 10.00 am (work on sextupole)

II. RHIC Schedule:

  • Today:
    • 09.00 - :: Fix RF (2 hrs behind the store)
    • STAR Physics afterwards
  • Tomorrow:
    • 08.00 - 16.00 :: APEX
    • 16.00 - :: CeC dedicated time

III. Items from shifts:

  • Day shift:
    • GMT was checked to be ok after the gas bottle change yesterday
    • "Reboot" button for LV on sTGC sometimes does not complete correctly. Got the same error last week. It says "starting socket... / sudu: command not found" --- David will look into it
    • High Sca Hz rates for BHT2*BBCTAC, BHT3, and JP2. iTPC and TPX at 100% dead and would not recover. If the Sca Hz for BHT2*BBCTAC and BHT3 are high, causing the iTPC and TPX to be 100% dead, click "prepare for physics" on the BEMC computer and wait for the terminal to say "Finished checking FEEs on all SMD Crates" ---- it would be a good idea to do "prepare for physics" on the BEMC computer between fills
    • Masked out iTPC S2-3
    • The TOF PT-1 gas ran out, so we switched the valves
  • Evening shift:
    • ETOW lost communication. Took out of run. Power cycled crate 90 (controls) and reconfigure all (from Will)
    • iTPC RDO 10-4 was masked out
  • Overnight shift:
    • Shift crew noticed hole in BTOW ADC plot from runs 19 and 21. DOs determined it to be from crate 0x14 and the run was stopped. DOs reapplied HV

03/14/2022
I. Summary of operations:

  • Unexpected beam loss 3 times, TOF and MTD HV alarms during beam loss
  • Faster loss of intensity after beta squeeze during last night’s run. Yellow sextupole failure, fixed now, should be better from the current fill
  • A few issues with MTD, EEMC, Trigger during the shifts

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • High Err numbers in many of the triggers on DAQ monitoring web page during one run (Run 23072031), stopped and restarted
    • dimuon trigger Sca Hz is red and sitting around 900 Hz with a Sca Dead of 18%. The Daq Hz is low (during Run 23072035)
    • EHT0*BBCTAC trigger had a high Sca Hz rate of 2500. EHT1 trigger had a high Sca Hz rate of 140 (during Run 23072044) --- could be from beam issues
  • Evening shift:
    • While updating L2 pedestal reference for run23072042, L2 trigger crashed. Jeff asked Navagyan to restart L2 trigger and update the L2 pedestal reference for run23072043
    • Sudden beam loss and saw MTD low HV red alarms (value ~4600) on some sectors
  • Overnight shift:
    • dimuon trigger Sca Hz is red and sitting around 1600 with a Sca Dead of 25%. Shift crew observed hot spot in MTD critical strips vs BL plot between 100-120 and 4-8 (Run 23073002+)
    • Got TOF LV error (NW THUB current is red) so power cycled LV. Also itpc[19] preventing run from starting. Contacted Tonko for assistance ---- needed power cycling of one FEE that was causing issues
    • GMT gas bottle changed, but adjusted to have pressure at 14 psi. Expert might want to look ---- Alexei will take a look. Shift crew are not supposed to alter anything other than changing the bottles
    • Unexpected beam loss and got TOF and MTD HV alarm
    • Temperature for crate 1 is at 0 on the ETOW GUI. ETOW QA plots monitored closely but no issues seen. Continued runs with this caution message. Expert instructed the same, and to power cycle the crate and then issue a master reload at the next beam dump
  • Day shift (Today):
    • TPC high dead times; power cycling FEEs didn't help. From the last pedestal run not completing ---- if a pedestal run doesn't complete and keeps requesting more events, call the expert

03/13/2022
I. Summary of operations:

  • Magnet trip around 5.30 pm, same issue as last Thursday. Couldn't be ramped back until this morning
  • Took zero field alignment runs with low luminosity beams
  • Magnet ramped back around 8.00 am. At the next opportune time, a more permanent fix to the circuit causing the issue will be made
  • Online status viewer not loading ---- Dimitri was called to fix

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • BERT was saying we had Physics On for 5 min after a beam dump ---- communication issue, didn't repeat in later shifts
    • Took access for laser and interlock communication issue
    • Piezodrive froze; I came to the WAH and restarted it by flipping the AC power — Alexei
    • Corrupt Trigger Data Event : Token 2213, Node qt32d-3 (Crate=5, Slot=6) (TCU Event Ctr 0x7f8b3343, Node Readout Ctr 0x7eb4aff2), similar msg in the next run also ---- not an issue
    • Noticed the Sca Hz associated with the FCS were high. Stopped run. Rebooted FCS from run control
  • Evening shift:
    • Made a controlled access and power cycled the crate #52 manually and interlock monitoring is working again — David
  • Overnight shift:
    • "TOF: too many trays (111>5) require power cycle -- consider TOF/MTD CANBUS reset!" so will restart CANBUS
    • Unknown alarm at ~4 AM heard by entire shift crew, lasted only about 1 second. High-pitched train whistle like sound (not very loud) from area of gas/interlock alarm

03/12/2022
I. Summary of operations:

  • Beam conditions are very good, peak luminosity touching around 600 kHz ---- will keep watching, and will ask MCR to trim down a bit if needed
  • Issue with the abort kicker, few hours till next fill during day shift
  • Shifts were smooth, no major issues

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics
  • Run officially extended by 2 weeks!

III. Items from shifts:

  • Day shift:
    • Regained communication with FCS with help from David
    • Shift crew noted that the wide sTGC Diagonal Strips Plane 3 QA plot has been varying in intensity since run 23070017. STGC HV looks okay on GUI for Plane 3 ----- Powercycle LV to fix
    • Jeff made an update to the JEVP plots attempting to stop the crashes that have been happening periodically
  • Evening shift:
    • iTPC: RDO S13:2 was masked out
    • The interlock alarm handler appeared white. Expert said that there seem to be some issues with Canbus 52; he was unable to bring the power back up and suspects a fan failure. Ignore and keep running for now; the electronics expert will try to go in and fix it at the next access ----- short access expected, so will take access at the end of this fill
  • Overnight shift:
    • Gaps in sTGC hits/VMM, powercycled LV to fix
    • The laser cameras do not change when steering buttons are pressed. The laser is very poor for this run. ----- Alexei will check after the meeting

03/11/2022
I. Summary of operations:

  • Beam conditions are very good after the switch to Siemens MG. Better luminosity, peak for last fill at 600 kHz, may want to reduce depending on STAR's preference. Better beam polarization (55% for both beams as per the 9 am meeting)
  • Downtimes due to STAR Magnet trip and Yellow abort kicker issues
  • Magnet ramped back ~3.30 pm. Beam back around 4.00 pm, unexpected beam loss around 11.00 pm
  • Shifts were smooth, some issues for FCS and ETOW

II. RHIC Schedule:

  • Today:
    • STAR Physics (after the magnet is ramped back)
  • Tomorrow:
    • STAR Physics
  • CeC scheduled to have dedicated 4 days (not continuous) for run till April 4, starting March 21

III. Items from shifts:

  • Day shift:
    • Oleg and Bill put ecal block on the east side
    • FST had a failure code 2, out of runs. FST Disk3 Module1 inner sensor shows a significantly higher current than other channels; it seems the inner HV cable of D3M1 broke. A 1-hour access is needed to investigate the cable issue. Currently running D3M1 at 120 V with current around ~25 muA ---- expert wants an access, not urgent, when available
  • Evening shift:
    • iTPC RDO 14-4 masked out
    • ETOW out of 3 runs because of lost connection. Took a while, with help from the expert, to bring it back
  • Overnight shift:
    • L3 event display only shows lasers on north side and none on south. During evening shift: laser was left on for longer than usual (>20min). Laser power was suddenly lost and the run was stopped ---- Likely from DOs not tuning correctly
    • Unmasked iTPC S4-2 and S16-4
    • There is a silenced IOC error for FCS LV (IOC_Monitor->forward-cr->FCS,FCS_LV). Online plots for FCS are normal. ---- Jarda will look into

03/10/2022
I. Summary of operations:

  • Beams through all shifts for STAR physics
  • Three vernier scans during day shift, took data
  • Luminosity comparable to before, above 400 kHz for ZDCAnd_nokiller. Unexpected beam loss around 7.00 am today
  • STAR magnet trip ~6.45 am. CAS tried to bring the magnet up three times without success. Trying to ramp again now.

II. RHIC Schedule:

  • Today:
    • STAR Physics (after the magnet is ramped back)
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • Lost an HV board in the sTGC. sTGC and MTD out of run on expert recommendation. Back in after two runs
    • Stopped the run due to high rates in triggers associated with FCS. Called Akio and DO power cycled the LV
    • BTOW Pmt Box 10 had empty histogram entries. Cycled the HV of PMT box 10 after run stop. Looking back at the QA histograms, that PMT box had missing entries since run 27 today
    • Needed to reboot L4 to stop run for a few runs --- likely from scheduled disk checks on Wednesday, shouldn't happen again
  • Evening shift:
    • Laser run: The QA plot of TPC drift velocity is empty ---- from QA plots not updating and needing to reboot jevp servers
    • ITPC: Sector 4, RDO 2 masked out
    • QA plots were not updating; read through the manual for it, and followed the instructions for both the "SHIFT" and "L4" server restart processes
  • Overnight shift:
    • Couple of time stgc QA plots showed missing hits, power cycled LV to bring back
    • Jeff adjusted trigger rates to minimize deadtime

03/09/2022
I. Summary of operations:

  • MCR did switch from Westinghouse to Siemens MG. Went well. RHIC injection polarization ~61%
  • Extended the store from morning till 6.00 pm. Yellow abort kicker issues after the beam dump, fixed and beam back around 3 am
  • Luminosity comparable to before, unexpected beam loss around 7.00 am today
  • Very smooth shifts. No major issues

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • Persistent FST failure with error code 2. Taken out of runs. Included back in runs after the expert changed the FST current trip limit and alarm limit from 20 to 30 muA
    • Jeff rebalanced trigger rates in production_pp500_2022 ---- Jeff will watch data rates and adjust if there is much deadtime
  • Overnight shift:
    • Vernier scan at start of store ---- MCR will be asked to let STAR know in advance of Vernier scan plans/schedule
    • Multiple emcvme1_u5val low alarms, looked at GUI and everything turned white. Called Oleg who fixed the issue

03/08/2022
I. Summary of operations:

  • Beams for STAR physics all day without much downtime
  • Extended the last store yesterday till today morning to start a new fill ~7.00 am
  • A few issues with BTOW, L2 error, but overall smooth runs

II. RHIC Schedule:

  • Today:
    • Siemens MG Changeover (behind long store for STAR): Change to Siemens (7-11), Hold store until ready to re-inject (~19:00)
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • Trigger went 100% dead due to high rates for FCS triggers. Couldn't start runs due to FCS error "FEE count xx, expected 55". DOs power cycled FEEs with help from Oleg
    • Got "BTOW crate 15 failed to configure" a few times at start of run. Oleg changed crate 15 to manual configuration
    • Akio loaded new FCS tower gains, took test run near end of fill
  • Evening shift:
    • iTPC S12:1 masked out
    • Much of BTOW is missing in critical plots. DOs called the expert: take it out of the run and power cycle LV, reapplied HV
    • Stopped the run: BERT unexpectedly says "Physics OFF" but there's still beam in RHIC. Seems to be a connection issue
  • Overnight shift:
    • New missing hits in sTGC (hits/VMM) and sTGC (hits/Fob). Back in the new fill this morning ---- if power cycling the ROB didn't help, please ask the crew to power cycle the entire LV. It seems the crew power cycled the wrong ROB (from Prashanth)
    • Error message mid run 23067012 that BC1 timed out, "L2 stopping run”. On trigger component tree BC1 was red, TCD flashing red. TCD sys-reset and powercycling TCD VME crate didn’t help. Tonko fixed remotely ---- Separate issues from BC1 and TCD. TCD issue from Tonko updating the codes. For BC1 issue, powercycle the BC1 crate
    • iTPC s12-1 unmasked

03/07/2022
I. Summary of operations:

  • Got beams for STAR physics all day without much downtime
  • Unexpected beam abort two out of three times
  • Some issues with TPC, BTOW, ETOW, but nothing major during shifts

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics, Change to Siemens MG behind a long RHIC store

III. Items from shifts:

  • Day shift:
    • ETOW and ESMD out for 2 runs due to configuration problem with ETOW crate 2. Back after powercycling of all crates and reconfigure
    • iTPC sector 9, RDO 1 masked after three failures
    • BTOW crate 15 required a restart of the run a couple of times, also twice during evening shift ---- for crate 15, just restart the run
  • Evening shift:
    • Several runs had to be stopped due to various iTPC RDOs, not necessarily consecutively: S14:1(x2), S13:2(x3), S12:1(x2), S12:3
    • iTPC RDO S13:2 is masked out
  • Overnight shift:
    • iTPC: sector 12, RDO 1 masked
    • FCS started to show this error: FEE count 13, expected 26. This was fixed by Oleg after powercycling FEEs
    • Critical plots for SHIFT not being updated, JEVP server needed to be restarted
    • ETOW: configuration failed, DOs found that crates 5 and 6 were red, out of one run. Rebooted and reconfigured and back in run
    • BTOW: configuration failed -- watch BTOW triggers or restart run. We restarted the run and BTOW triggers were firing at too high a rate. Reconfigured

03/06/2022
I. Summary of operations:

  • STAR Physics through yesterday, without many interruptions
  • FCS: DEP10:2 failed, was taken off runs from ~8.30 pm. Fixed and back in ~6.00 am

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift:
    • Overall smooth shift, with a power cycle of an iTPC RDO and of a few ROBs (often #15) in the sTGC every once in a while
    • iTPC sector 3, RDO 3 masked
    • Incorrect number of events shown on RunMarker for several runs. Issue of monitoring programs not updating properly, but files were sent to sdcc fine
  • Evening shift:
    • btow crate 23 removed from automatic configuration, it is on manual control now (Oleg)
  • Overnight shift:
    • sTGC (hits/VMM) plot shows a line that appeared from run# 23065006 (see shift log)
    • FCS fixed and included back in run. Masked RDOs (iTPC sector 3, RDO 3 and iTPC sector 4, RDO 2) also fixed by Tonko ---- Akio will follow up with Tonko so that there is more than one expert who can be contacted to fix FCS issues

03/05/2022
I. Summary of operations:

  • 56 x 56 bunches low luminosity run for alignment yesterday. STAR magnet was OFF. Got 6 hrs, till 8 pm
  • CeC took access for 2 hrs. Back to high luminosity run ~11.30 pm
  • Lost beam due to Quench Link Interlock around 3.00 am. MCR just got back a new fill
  • Shifts were smooth, no major issues

II. RHIC Schedule:

  • Today:
    • STAR Physics
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Overnight shift:
    • Masked TPC: Sector 4, RDO 2 ----- Tonko will look into
  • Day shift (Today):
    • At start of run, got Caution messages about BTOW crate 23 (crate failed, config failed). Trigger scaler rates all look okay, except for Jpsi*HTTP which is ~400 Hz (expect 50) ------- Indicates misconfiguration of the crate, not to be ignored. Oleg will put this crate under manual configuration

03/04/2022
I. Summary of operations:

  • AGS chiller work not finished yesterday, so it was decided to provide a fill for STAR and work behind it today
  • Got beam around 04.00 am today (lost around 09.30 am), high intensity fill
  • Shifts were smooth, no major issues

II. RHIC Schedule:

  • Today:
    • No APEX today. 56 x 56 bunches low lumi fill for alignment run. STAR magnet off. 6 hr fill expected
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Overnight shift:
    • sTGC out of one run due to communication loss. DOs fixed the problem by power cycling crate 1 and bringing back the LV modules. Happened once also on today's day shift ----- no need to take sTGC out of the run; fix it and include it back in the run
    • Getting messages ‘TPX TDBm: error setting up -- TPX might not work!’. Been happening for all runs since ~05.30 am. DAQ and online plots for TPC look ok ------- likely a communication issue; need to power cycle the TDM board. Tonko will follow up with the shift

03/03/2022
I. Summary of operations:

  • Low luminosity run yesterday night with vernier scans at beginning and end of fill
  • MCR has injector issues, working on fixing those now. STAR is open for access. MCR expects at least 6 hour down time, possibly longer
  • Shifts were smooth, no major issues

II. RHIC Schedule:

  • Today:
    • STAR Physics (after beam is back)
  • Tomorrow:
    • 08:00-14:00 APEX
    • 14:00-16:00 CeC Access
    • 16:00 - STAR Physics

III. Items from shifts:

  • Smooth shifts, no major issues

03/02/2022
I. Summary of operations:

  • Generally smooth data taking, but some issues with TPC
  • Fills are lower luminosity (peak 400 kHz), mostly correlated with blue intensity
  • Took runs with modified DX angle and low luminosity run with missteered beam at end of fill last night

II. RHIC Schedule:

  • Today:
    • 7:00-11:30 CeC access
    • 12:00-17:00 APEX
    • 17:00 - STAR low lumi. fill
  • Tomorrow:
    • STAR Physics

III. Items from shifts:

  • Day shift: Got beam around 1.15 pm. Smooth shift
    • Had a problem with TPX[4] died/rebooted, was taken out for one run
  • Evening shift: Smooth Physics runs, some TPC issues (19:00-)
    • The itpc20 node died. iTPC rdo S20:4 masked out
    • tpx and itpc were going 100% persistently dead, masked out iTPC: RDO S13:3 ----- Both channels to remain masked out. Tonko looking into it, suspects a hardware issue
  • Owl shift: Smooth Physics runs
    • Issue with laser and cathode HV GUI, David restarted VME crate to bring back
    • Lost laser in the middle of the run (~06.00 am) ---- Alexei checked laser today morning, works fine

03/01/2022

Period coordinator change: Zaochen Ye ==> Sooraj Krishnan 

I. RHIC Schedule

  • Today: Collision for Physics
    • Local pol test with DX angle (any opportune end of store)
  • Tomorrow: 7:00--
    • 7:00-11:30    CeC access
    • 12:00-17:00  APEX
    • 17:00 -          STAR low lumi. fill

    II. Notable items/recap from past 24 hours: Smooth Physics

    III. Items from shifts:

    • Day shift: Smooth Physics runs (ended ~11:15am) + Cosmic runs
      • Masked out iTPC sector 13 RDO 3
      • Alexei restarted Laser PC, fixed laser GUI no-response issue
      • Akio uploaded a new electronic gain file for FCS ECal, Jeff updated associated production ids
    • Evening shift: Smooth Physics runs (19:00-)
      • sTGC lost communication, and both crates had to be rebooted and powercycled
      • BEMC sector 10 had hits missing for half a run. Had to be powercycled
    • Owl shift:       Smooth Physics runs 

    IV. Other items: may request to reduce CeC and APEX time to get more physics runs


    02/28/2022

    I. RHIC Schedule

    • Today: Collision for Physics
      • will have a local pol test with DX angle 14:00, nominal end of store 14:25
    • Tomorrow: Collision for Physics

    II. Notable items/recap from past 24 hours: Cosmics + Smooth Physics

    III. Items from shifts:

    • Day shift: (first a few hours) Cosmic runs + (11:15) Smooth Physics runs
      • many auto-recoveries in TPX, iTPC, TOF
      • sTGC error "ROB #01 bad FEB.”, powercycle ROB #1
      • FST 100% dead, no errors in scrolling DAQ messages. Power cycled FST.
    • Evening shift: Smooth Physics runs when beam available (no beam: 19:00-21:00)
      • TPC Anode Trip, GMT HV needed powercycling
    • Owl shift:       Smooth Physics runs 
      • missing bins in EEMC Tower ADC plots, expert power cycled crate 92, and performed master reload
      • laser control GUI (where the camera is controlled) froze, Alexei will investigate during day shift
      • ITPC: Sector 23, RDO 4 -- powercycle then restart run (several times)

    Other discussions: request low lumi runs (Wed or Thu ?) 

    02/27/2022

    I. RHIC Schedule

    • Today: Collision for Physics till Monday morning, exchange the cathodes (~ 2 hours) 

    II. Notable items/recap from past 24 hours: Cosmics + Physics + Cosmics

    III. Items from shifts:

    • Day shift:  Cosmic runs + 3 Physics runs
      • node TPX[19] removed / masked from run configuration
    • Evening shift: 13 Physics runs
      • node TPX[19] removed / masked from run configuration
    • Owl shift:       5 Physics runs + Cosmic runs
      • TPX: RDO S01:6, TPX: RDO S01:3, ITPC: Sector 1, RDO 4, ITPC: Sector 10, RDO 3 
      • partial trip of BTOW PMT 10
      • Beam lost ~2:40 (failure of a Yellow Abort Kicker, requires a Thyratron replacement) 

    Other discussions: CAD should do a better job to deliver beam  

    02/26/2022

    I. RHIC Schedule

    • Today: Collision for Physics till Monday morning, exchange the cathodes (~ 2 hours) 

    II. Notable items/recap from past 24 hours: Cosmic runs

    III. Items from shifts:

    • Day shift:  Cosmic runs
      • Alexei changed a burned mirror for East laser (camera 3), more laser tracks on East part of TPC
      • 12:37 Tonko fixed a bug in the ROB firmware which caused odd behavior under sparse data taking conditions (cosmics, low lumi runs). The new code version is 0x126 (old, up to now, was 0x124)
    • Evening shift: Cosmic runs
    • Owl shift:       Cosmic runs
      • sTGC air flow alarm went off, slow control issue
      • Tonko: “Now all STGC RDOs have the newest firmware version (0x126).” 4:40am


    02/25/2022

    I. RHIC Schedule

    • Today: 7:30 access to exchange garages, start bake load-lock chamber (~ 8 hours)
    • Collision for Physics till Monday morning, exchange the cathodes (~ 2 hours) 

    II. Notable items/recap from past 24 hours: Access + Low lumi + Smooth Physics

    III. Items from shifts:

    • Day shift:  Smooth physics till 13:30 + CeC Access
      • A few powercycles of iTPC RDO, sTGC ROB, BTOF, DAQ
    • Evening shift: Low lumi runs till 21:13 + normal physics runs started 23:59
      • testVernierScaler, forwardCrossSection, zdc_Polarimetry_2022
      • Trigger was 100% dead. sTGC plots looked abnormal and had missing tiles; stgc_n_sigma reset to 12 and 8. Taking a production run to check what the issue is with sTGC; still an open issue (days to solve). Will request more low lumi runs after the sTGC issue is figured out
      • MXQ crate had to be powercycled
    • Owl shift:       Smooth Physics runs till 5:30 am (beam lost)
      • False magnet trip alarm, apparently due to communication or slow control glitch
      • BEMC PMT box 10 west partially tripped, solved by reapplying HV

    Others: if plan to make use of the access time, please bring up

    02/24/2022

    I. RHIC Schedule

    • Today: 14:00-24:00 STAR Low lumi. Fill with Field on (Forward experts should watch the online plots to make sure detectors are running good)
    • Friday: Aperture scan and crossing angle for local pol. measurement

    II. Notable items/recap from past 24 hours: Access + Smooth Physics

    III. Items from shifts:

    • Day shift: Access
    • Evening shift: Smooth Physics started from 18:33
    • Owl shift:       Smooth Physics
      • FCS: persistent errors “FEE count 44, expected 5”. FCS GUI not responding. Oleg power cycled FEEs manually, which cleared the error message; however, the GUI was still frozen. Solved by restarting the IOCs
      • iTPC node 13 died at the end of the evening fill, back online after power cycling the corresponding computer


    02/23/2022

    I. RHIC Schedule

    • Wed: 7:00-13:00 Maintenance
    • Thu: 16:00-24:00 STAR Low lumi. Fill with Field on 

    II. Notable items/recap from past 24 hours: Smooth Physics

    III. Items from shifts:

    • Day shift: CeC access + Smooth physics + AGS Snake issue
      • TPC gating grid lost communication, solved by rebooting the software
    • Evening shift: Smooth Physics + a few issues
      • Dimuon triggers were not included in time
      • "Incorrect Trigger Clock 76 MHz" error due to a stuck PLL. Had to power cycle the TCD and turn the MXQ on
      • FCS PS channel U302 got stuck. Power cycling MPOD crate recovered that channel
    • Owl shift:       Smooth Physics

    IV. Access today:

    • Tonko: DAQ tests with TPX and iTPC, don't need access but no runs with Run Control, finished 10:15.
    • MTD group: 2.5 hrs with field cosmics after Tonko’s test. MTD HV at 13.2 kV to check efficiency vs. 13 kV
    • Wayne:
      • replace a disk in the computer named deneb, on the 2nd floor of the South Platform
      • apply some routine software and OS updates to the shift-leader PC in the Control Room

     

    02/22/2022
    I. RHIC Schedule

    • Today: 10:00-12:00 CeC access, after will be Collision for physics
    • Wed: 8:00-16:00 Maintenance
    • Thu: 16:00-24:00 STAR Low lumi. Fill with Field on 

    II. Notable items/recap from past 24 hours: Smooth Physics

    III. Items from shifts:

    • Day shift: Smooth physics
      • BTOW: PMT box 10 west partially tripped. Solved by reapplying HV
    • Evening shift: Smooth Physics 
      • MTD HV BL 24-30 could not be set to full from standby on the first try; the second try succeeded
      • ETOW crate Id 5 configuration failure, solved with expert’s help
    • Owl shift:       Smooth Physics 
      • BTOW: PMT box 10 west partially tripped. Solved by reapplying HV
      • sTGC  error: 1: FOB 4 (R#13:F#77) NOT found, power-cycle ROB #13

     

    02/21/2022
    I. RHIC Schedule

    • Today-Tuesday morning: Collision for physics

    II. Notable items/recap from past 24 hours: Smooth Physics + a few issues

    III. Items from shifts:

    • Day shift: Smooth physics
      • GMT -> APV -> pedestals_APV_0-3 pulse height for all channels in ARM:0, APV0 shows a second band and scattered values all throughout
      • BTOW Crate Id 1: Oleg put under manual control -> Watch trigger rates and reconfigure if needed
    • Evening shift: Smooth Physics 
      • Global communication problem, fixed by David
      • "too many recoveries - powercycle TOF LV": Tray 67 (dead, masked out) was accidentally enabled by a DO, which caused the failure of auto recovery. Zaochen updated the instructions to reduce such mistakes
    • Owl shift:       Physics run + issues
      • DAQ issue: EVB[3] [0xF103] died/rebooted. EVB3 is suggested to be out of the run for now. Data-writing issue (No space left on device)
      • Error "Incorrect Trigger clock 10.000 MHz", solved by power cycling the TCD and turning the MXQ on from off. Tonko: "One TCD board was stuck and couldn't lock its PLL; it happens after too many Trigger reboots"
      • High rates of JetPatch triggers, solved by reconfiguring BEMC. Related to the TCD power cycle
      • 6:17am, TPC Gating-grid monitor lost connection, need access to powercycle hardware (8:50am, Tim cycled crate 50, gating grid control is back now)
      • IOC for sTGC LV lost connection, solved by David
      • Laser brightness has no response to control, drift velocity looks normal. Alexei fixed this. 

    02/20/2022
    I. RHIC Schedule

    • Today-Tuesday morning: Collision for physics

    II. Notable items/recap from past 24 hours: Smooth Physics
    Main discussions: Carl: the "correct" bunch crossing has barely more hits than any of the visible out-of-time bunches. In this environment, it’s impossible to see if 2-5% of the triggers are late by a RHIC tick. We’re going to need a 28 or 56 bunch fill to answer this question. ZDC_Polarimetry runs can go back to “TRG + DAQ only”; will try “TRG + DAQ + FCS” sometime in the middle of next week (Wed-Fri, decide in schedule meeting?)

    III. More items from shifts:

    • Day shift: Smooth physics
      • FST plots were empty during the run, but filled after the run stopped
      • JP triggers hot, solved by restart run
      • Run stopped by EQ crate, solved by power cycle and reboot trigger then restarted run
    • Evening shift: Smooth Physics 
      • PMT box 10 west partially tripped, solved by reapplying HV
      • from run 23050064: "dimuon" and "hit_dimuon_mon" are on Enabled list
    • Owl shift:         Smooth physics run
      • BTOW Configuration failed for crate 1 (will mask out from auto configuration, do similar thing as crate 8)


    02/19/2022
    I. RHIC Schedule

    • Today-Tuesday morning: Collision for physics

    II. Notable items/recap from past 24 hours: Smooth Physics when beam available 

    • GMT HV alarm went off; HV did not respond to "Physics" or "Standby" mode in the GUI. Nikolai fixed the GMT: it had lost connection. Fixed by rebooting the crate and restarting the controls

    III. More items from shifts:

    • Day shift: Physics started 10 am

       

      • sTGC air blower, solved and discussed yesterday 10 am
      • Alexei fixed TPC air blower 
    • Evening shift: Smooth Physics 
    • Owl shifts: Smooth Physics run (till 5:30, injection issue, solved 7am)
      • FCS FEE LV caused missing hits in FCS ECal's North 18~24 and high rates of FCS ECAL triggers. Resolved by powercycling FCS FEE LV
      • BTOW Configuration failed for crate 1 (23050011, 23050013) and L2 issues (23050016)
      • iTPC errors (S2, RDO 1; S16 RDO 1). Tonko fixed

    IV. other items:

    • Please take zdc_polarimetry runs with (TRG+DAQ+FCS)

     

     

    02/18/2022
    I. RHIC Schedule

    • Today-Tuesday morning: Collision for physics

    II. Notable items/recap from past 24 hours: Snake scan + Physics. Main issues:

    • TPC gas alarm at 03:35; called Prashanth and Alexei.
    • 4:20, a power dip caused a magnet trip, problems with the water system, and an sTGC blower trip.
    • Alexei and Prashanth fixed the TPC gas and water issues, CAS brought back the magnet
    • 7:10, Tim and Mike accessed to fix the issue with the sTGC air blower (6:00, MCR: a spark in the ion source). The compressor was not running; after the compressor contactor was replaced, the compressor is now running. Replaced an LV relay for preventive maintenance. Mike connected aux contacts from the compressor relay to monitor the contactor state. Tim replaced the sTGC AHU contactor; Capotosto added remote monitoring of the contactor state to the sTGC interlock cabinet. All issues fixed; sTGC can be added to the run now.
    • GMT HV alarm went off; HV did not respond to "Physics" or "Standby" mode in the GUI. Issue is still open

    III. More items from shifts:

    • Day shift: snake scan for spin direction

       

      • BTOW is flashing blue (W) for >30 minutes
    • Evening shift: Smooth Physics while beam is available
      • Beam came at 19:35, beam lost at 21:12, beam back at 23:20
    • Owl Shift: Smooth physics run
      • Trigger: BBQ and EQ2 VME crates tripped, then L2 was stopping the run. Power-cycling the BBC and EQ2 crates and rebooting the trigger solved the problem.
      • Online-plot issues for TPC, MTD, and ESMD.

    IV. other items:

    • Request for 2 polarimetry runs at the end of the fill

     

    02/17/2022
    I. RHIC Schedule

    • Today:       8:00-14:00 Snake scan for spin direction
    • Tomorrow: Collision for physics

    II. Notable items/recap from past 24 hours: General: APEX + Smooth physics

    • Main issues: channel 1 of the 2nd board was mistakenly assigned to channel 1 of the 1st board, so the channel was not turned on when controlled by the GUI. David fixed it and HV is back on that sTGC chamber. Online plots are back to normal.

    III. More items from shifts:

    • Day shift: APEX

       

      • Alexei replaced GMT tank-1
      • David found an HV channel was mis-mapped; online plots look normal
    • Evening shift: Smooth Physics started 21:40
      • JP1 was high due to hot trigger patches on crates 0E, 0D, 05, solved by power cycle. However, crates 0C, 07, 0F did not get configured at the start of the next run. Oleg Tsai helped manually configure these crates.
    • Owl Shift: Smooth physics run
      • High JetPatch trigger rates; Oleg manually configured BTOW crate 1E. Restarted the run; JP rates are normal.
      • Problems twice when stopping runs (EVB[5] [0xF105] died/rebooted; ITPC[23] [0xBF97] died/rebooted)

    IV. other items:

    • Remind shift crews to check the critical plots more frequently

    02/16/2022
    I. RHIC Schedule

    • Today:        9:00-19:00 APEX (Au, 3.85 GeV/n)
    • Tomorrow: 8:00-19:30 Siemens test and Rotator scan?

    II. Notable items/recap from past 24 hours: General: Access + Smooth physics (Owl)

    • Main issues: David, Prashant, Mike, Tim installed sTGC HV boards in the MTD CAEN HV crate, rerouted and reconnected HV and interlock cables. However, sTGC still looks strange in test pedestal and cosmic runs. Experts will check the RDO and other components

    III. More items from shifts:

    • Day shift: Access
      • David, Prashant, Mike, Tim installed sTGC HV boards in the MTD CAEN HV crate, rerouted and reconnected HV and interlock cables. However, sTGC still looks strange in test pedestal and cosmic runs
      • Jeff cleaned up and changed some cables for iTPC DAQ machines
    • Evening shift: Physics run started 23:50
    • Owl shift: smooth physics run
      • 2B quadrant was empty in sTGC (hits/Quadrant) online plots.
      • Masked ITPC: Sector 11, RDO 4

    IV. Other items?

     

    02/15/2022
    I. RHIC Schedule

    • Today:        7:00-15:30 maintenance, then collisions for physics
    • Tomorrow:  9:00-19:00 APEX (Au, 3.85 GeV/n)

    II. Notable items/recap from past 24 hours: General: Access + Smooth physics run

    • Main issues:
      • sTGC HV GUI lost connection, David and Tim checked sTGC CAEN HV crate, no solution, run without sTGC
      • Partial trip of BTOW PMT box 10 West, solved by reapplying HV to this PMT box

    III. More items from shifts:

    • Day shift: Access
      • sTGC bad chamber was checked by Prashanth and Tim, will order parts and replace later
      • FST coolant refilled by Xu and Yu
      • Wayne fixed the issue with BBC_DSM2 and BCW_DSM2; if the same problem happens, call trigger experts first, then possibly Wayne.
    • Evening shift: Access + Smooth physics run
      • sTGC HV GUI lost connection, David and Tim checked sTGC CAEN HV crate, no solution, run without sTGC 
    • Owl shift: smooth physics run
      • no sTGC in the run

    IV. Other items?

    02/14/2022
    I. RHIC Schedule

    • Today:      7:00-15:30 maintenance, then collisions for physics
    • Tomorrow:  7:00-15:30 maintenance, then collisions for physics

    II. Notable items/recap from past 24 hours:

    • General: Smooth physics run 
    • Main issues:
      • Error configuring BBC_DSM2 Node and Error configuring BCW_DSM2 Node. Reboot all did not help. Power-cycling the corresponding VME crates and rebooting all did not help either. Had to take BBC_DSM2 and BCW_DSM2 out of the Component tree (experts are working on it)
      • Tonko masked Crate Id 8 from _any_ configuration; Oleg showed how to manually reconfigure BTOW crate 8. Will need to keep watching the performance

    III. More items from shifts:

    • Day shift: smooth physics run
      • BTOW crate 8 config. error; Tonko: disable -> rates too high -> keep old procedure
      • Online QA plots lost, solved by resetting the jevp server
      • Lost connection to sTGC LV, solved by David T.
    • Evening shift: smooth physics run
      • iTPC: Sector 4, RDO 3 powercycled
      • sTGC LV errors, solved by power-cycle
      • BTOW: PMT box 10 west partially tripped, solved by reapplying HV
    • Owl shift: smooth physics run
      • Error configuring BBC_DSM2 Node and Error configuring BCW_DSM2 Node, moved out from the Component tree

    IV. Other items?

    • Access plan on Monday-Tuesday (8 hrs each day)
      • Magnet maintenance: Water Supply Group will be cleaning the strainers in the STAR magnet power supplies during the maintenance on Monday.  This involves de-energizing STAR magnets.
      • FST: Refill the coolant for FST, need about 30 mins for the fill then 20~30 mins cosmic runs to verify
      • others?

     

    02/13/2022
    I. RHIC Schedule

    • physics run

    II. Notable items/recap from past 24 hours:

    • General: Smooth physics run 
    • Main issues:
      • DO had difficulty rebooting CANBUS to solve BTOF LV errors, due to a mistake in the password list
      • Several EVB errors
      • BTOW configuration errors happened many times on crate 8 (Oleg will provide solutions for the shifts to help reduce the downtime for data taking)

    III. More items from shifts:

    • Day shift: smooth physics run
      • BTOW configuration issues solved by restarting the run, but this wastes a lot of beam time (experts will provide a better solution)
      • 2: FOB 2 (R#14:F#81) NOT found -- power-cycle ROB #14
    • Evening shift: smooth physics run
      • Power-cycle BTOF LV
      • sTGC "3: FOB 0 (R#07:F#37) NOT found" errors, and we power-cycled ROB #07
      • sTGC "4: FOB 3 (R#16:F#94) NOT found" errors, and we power-cycled ROB #16
      • "EVB[3] [0xF103] died/rebooted"
      • "BTOW: failed crate Ids: 1; BTOW: configuration failed - watch BTOW triggers or restart run."
    • Owl shift: smooth physics run
      • L4 got stuck at the end of several runs: "EVB[3] [0xF103] died/rebooted", "EVB[4] [0xF104] died/rebooted"
      • BTOF LV errors; had difficulty rebooting the CANBUS due to a mistake in the password list
      • A few BTOW configuration failures for crate Id 8 and one for crate Id 1
      • iTPC RDO S20:1 was masked

    Other items?

    • Access plan on Monday-Tuesday (8 hrs each day)
      • Magnet maintenance: Water Supply Group will be cleaning the strainers in the STAR magnet power supplies during the maintenance on Monday.  This involves de-energizing STAR magnets.
      • FST: Refill the coolant for FST, need about 30 mins for the fill then 20~30 mins cosmic runs to verify
      • others?

     

    02/12/2022
    I. RHIC Schedule

    • physics run

    II. Notable items/recap from past 24 hours:

    • General: smooth physics run 
    • Main issues:
      • Lasers did not respond well to controls; brightness did not change when pressing the control button. Called Alexei.
      • East laser was fine for two laser runs; West laser was far from normal brightness

    III. More items from shifts:

    • Day shift: physics run, short access, physics run
      • Issue with L2, solved by reboot trigger
      • Jeff fixed the TOF+MTD noise run rate, runs ~14 kHz now.
      • 13:00-14:30: David and Prashanth refilled the sTGC n-Pentane
    • Evening shift: smooth physics run
      • Power-cycle iTPC sector 14, RDO 3 (twice)
      • BTOW configuration failed, restarted the run
      • "tpc_caen_alive" alarm went off, fixed by following manual
      • sTGC "3: FOB 2 (R#11:F#63) NOT found", fixed by power-cycling sTGC LV ROB #11
    • Owl shift: smooth physics run
      • iTPC RDO S14:3 and S16:3 were masked out (had errors in 2 consecutive runs)
      • FCS error in DAQ, solved by restarting the run twice
      • sTGC LV: 2: FOB 3 (R#10:F#58) NOT found -- power-cycle ROB #10 if the problem persists.

    Other items?

    • Lumi was a little higher than usual during Owl
    • sTGC cable(chamber) 9
    • Access plan for Monday-Tuesday

    02/11/2022
    I. RHIC Schedule

    • Plan from RHIC
      • Collisions for physics now to Sunday

     

    II. Notable items/recap from past 24 hours:

    • General: APEX and smooth run
    • Main issues:
      • sTGC: HV cable 9 trip

    III. More items from shifts:

    • Day shift: physics run at the beginning, then APEX
      • 10:00, power dip, magnet tripped, lost slow controls. Experts solved the problem.
    • Evening shift: APEX ended around 22:00, Physics from 22:30
      • sTGC HV cable 9 tripped, ran without this channel
    • Owl shift: very smooth physics run
      • TPX 36 config error, solved by power-cycling TPC FEEs and Reboot All (comment: Reboot All is not necessary but does no harm)

    IV. Other items?

    -


    02/10/2022

    I. RHIC Schedule

    • Plan from RHIC
      • 8:00-16:00: LEReC APEX: 3.85 GeV/n Au
      • Today 16:00-Tomorrow 16:00: Collision for physics

    II. Recap

    • General: Rotator study, low lumi run (18:00-4:15am), 7:00 am normal physics run, smooth
    • Main issues:
      • sTGC: LV unresponsive, solved by short access (18:55). 
      • sTGC HV cable 9 trip, experts will discuss for a solution today
      • Timing during the alignment fill: "extra" peak in preshower, ECal and HCal 8 time bins before the "right" peak (Carl Gagliardi). Experts will follow up later.

    III. More items from shifts

    •  Day shift: Rotator study
      • 12:43: Wayne replaced the monitor for EPD-CR
    •  Evening shift: low lumi run
      • sTGC: LV1 is unresponsive, after a short access, run smoothly for the rest of the shift.
    • Owl shift: low lumi run till 4:15am, then normal physics 7:00am
      • sTGC HV Cable 9 trip, still open
      • BTOW configuration cautions; rebooting did not help, but rates and critical plots are good

    IV. Others items?

    • Battery for the clock replaced
    • Fast-offline data production requested for forward detector calibrations with the low-lumi runs.

    02/09/22

       I. RHIC Schedule

    • Today's plan:
      • 08:00 - 14:00: Rotator study
      • 14:00 - 23:59: STAR low luminosity run (for alignment; magnet off, 0.2e11/bunch, 8 hr, 56x56)
      • 00:01 - 08:00: collisions for physics
    • Tomorrow’s plan:
      • 8:00-16:00: LEReC APEX: 3.85 GeV/n Au

       II. Recap

    • Main issue in past 24 hours:
      • 14:17: EEMC (ETOW, BTOW) config errors; "GUI is white" seen from SC3 but not seen in the control room. (15:12) Dmitry K did "Recovery & Restart", turned HV and MAPMT FEEs back on
      • sTGC cable 9 HV trip; David fixed it, but after a 2 hr run it tripped again; David T. checked on it (see his email). Frequent HV current spikes even at a lower voltage may be due to dust in the chamber. If this happens again, no need to take sTGC out of data taking; just make a note of which channel/cable tripped, i.e., is not at full voltage in the run. Tripped again during Owl, under David T.'s control
      • 23:44, EPD GUI is saying that the current is too high for most channels; does not affect data taking. Rosi pointed out that the current values were fine, a problem with the GUI; she will follow up with the EPD mailing list.
    • Open issues:
      • sTGC cable 9 HV trip; David T. will work on it this afternoon, during the low lumi run.
    • Day shift: smooth run from the beginning to the end
      • Jeff fixed the issue affecting the sTGC critical plots (y-axis range).
      • Issues with BTOW and ETOW, Dmitry K worked on it.
      • EVB12 was out for a while in a run,  back in next run
      • sTGC LV issue, fixed by David.
      • sTGC cable 9 HV trip. After clearing the trip and ramping up, the channel tripped again. David advised a temporary solution of manually setting the demand voltage to 2K until he fixes it. David fixed the issue. 16:18, the same issue came again.
    • Evening shift:
      • "GUI is white" when trying to access slow controls from SC3. Followed manual section "Recovery & Restart", turned HV and MAPMT FEEs back on – Dmitry Kalinkin
      • EPD GUI is saying that the current is too high for most channels; however, this does not affect data taking. Rosi said the current values were fine, a problem with the GUI; she will follow up with the EPD mailing list.
      • TOF critical errors around 8pm, POWERCYCLE TOF LV. Tried to power-cycle at least twice, no luck. Geary instructed us to try one more time since the problematic trays are different.
      • Display for EPD computer went off. Michael will work on it in the morning
    • Overnight shift: Beam and physics for all time
      • Major issues with the sTGC Cable 9 HV supply.
      • sTGC cable 9 HV trip, slowly increasing HV by +10V or +5V, reached full, then tripped again, David worked on it. 
      • BTOW configuration cautions at the start of each run. Rebooting BTOW in Run Control or restarting the run did not help. Rates and critical plots look OK, keep running.
      • TOF ADC critical plot: One run (23040009) had ADC entries at ADC = 15 and ADC = 30 for all channels. Restarting the run helped.
      • Relatively many TOF auto-recoveries. Apart from the run mentioned above, no indication of problems in the critical plots.
      • Had to powercycle a couple of iTPC RDOs manually which helped.

    02/08/22

       I. RHIC Schedule

    • Collisions for physics today
    • Tomorrow’s plan:
      08:00 - 14:00 Rotator study
      14:00 - 22:00 STAR low luminosity run (for alignment; magnet off)

      II. Recap

    • Machine development all day yesterday, beam back later in evening shift
    • EEMC QA plots having issues during day shift (first note @ 07:54), needed short access after MD (before physics) and replace HVSYS module A
    • In overnight shift: ETOW error messages in DAQ for crate 6 -> No success in fixing the issue overnight; today's morning shift advised to power-cycle the crate and then issue a master reset. Noted: this is the procedure, no need to contact the on-call expert for this
    • Period coordinator shift changeover Matt -> Zaochen

      III. Open issues/status

    •  

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006606.html)

    02/07/22

       I. RHIC Schedule

    • Machine development until around 17:00, back to physics after

      II. Recap

    • In terms of sampled luminosity, yesterday was best day so far: 6.1 pb-1 w/ 18.5 DAQ hrs
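For context, this figure works out to roughly 0.33 pb-1 per DAQ hour; a minimal sketch of that arithmetic (the helper name is ours, values taken from this entry):

```python
# Illustrative only: average sampled luminosity per DAQ hour.
# Helper name is our own; numbers are from the 02/07 recap above.

def lumi_per_daq_hour(sampled_pb: float, daq_hours: float) -> float:
    """Return average sampled luminosity (pb^-1) per DAQ hour."""
    if daq_hours <= 0:
        raise ValueError("DAQ hours must be positive")
    return sampled_pb / daq_hours

# 6.1 pb^-1 over 18.5 DAQ hrs
rate = lumi_per_daq_hour(6.1, 18.5)
print(round(rate, 2))  # -> 0.33
```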

      III. Open issues/status

    • Discussed prospects of dedicated alignment/forward cross-section runs. Potential to do the alignment run Wed and the cross-section run Fri. Will be discussed during the scheduling meeting, etc.

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006594.html)

    02/05/22
       I. RHIC Schedule

    • Collisions for physics all weekend

      II. Recap

    • No beam 6:45 until around ~17:00
    • sTGC HV slowly ramped up during shifts; new temporary ramping procedure for cable 9 (see overnight shift log and https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006575.html)
    • Warning from TPC computer (chaplin): running low on GPU memory, usage at 99% (18:01)
    • Two TOF FM2 red alarms in overnight shift. FM2 plot shows sudden spikes (04:47)

      III. Open issues/status

    • Discussed making the new temporary sTGC HV ramp the same for the entire sTGC (i.e., automated)
    • BTOW crate 8 error - Noted shifts should just stop and restart run
    • Note on various reported high rates in logs: at the beginning of a fill this is expected. Jeff may change the baseline rate used to determine color-coding in the monitor
    • Need to follow up with Wayne on TPC computer (chaplin) GPU memory usage warning
    • Alexei following up on the current laser issue and the TOF FM2 alarm

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006577.html)

    02/04/22
       I. RHIC Schedule

    • MCR access until 12:00, about 2 hrs to get back to physics

      II. Recap

    • Physics running started around 22:50 due to long injection tuning
    • Lots of errors overnight (see below), however no major downtime
    • sTGC HV trip for plane 2 cable 9. Powercycled the HV - Investigation still ongoing
    • sTGC LV (23:31) power-cycled after getting the following error: "ROB-15: FOB 1 (R#15:F#86) NOT found -- power-cycle ROB #15 if the problem persists". This is also a correction note w.r.t. minutes sent directly to star-ops

      III. Open issues/status

    • Tentative plan to do rotator measurements/corrections Monday during the day
    • When beam comes back today, add BBC back into run for testing
    • Investigation of sTGC HV ongoing.

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006569.html)

    02/03/22
       I. RHIC Schedule

    • APEX 9:00 - 17:00. Followed by physics.
    • Tomorrow: CeC access ~7:00 for 3 or 4 hrs; TBD if machine development will follow, or back to physics

      II. Recap

    • All contingent activities for APEX running were resolved, so APEX will run today ~8 hrs (started at 9)
    • Tomorrow’s afternoon activities still TBD
    • Identified incorrect terminating resistor on injection kicker (40 -> 25 ohm); will fix during tomorrow’s access, should improve performance
    • TOF HV turned off during evening shift, values frozen in GUI indicated “on” (fixed with "IOC reset FULL” and ramping HV)
       

      III. Open issues/status

    •  

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006560.html)

     

    02/02/22
       I. RHIC Schedule

    • 07:00 to 11:00: RHIC access -- CeC clean room, Booster access for foil, back to physics
    • Thur: 09:00-17:00: APEX (if ready). Followed by physics.
    • Fri: 07:00-12:00(?): RHIC access injection kicker module replacement + CeC access; imperfection study after(TBD)

      II. Recap

    • Two (last minute) Vernier scans in day and evening shifts - Will update crew checklist to note what to do if this happens again
    • Booster RF power amplifier leak, extended evening shift fill, no beam after 2:55
    • Major issue with DAQ starting @ 18:20. Jeff stepped in and was able to eventually pinpoint the problem. Noted a computer disk may need to be changed - done by Tonko
    • Alarm for VME crates, lost communication and QA plots. David worked on this for a while. Decided access is needed (23:43). Accessed today at 9:30 - everything should be OK now
       

      III. Open issues/status

    •  

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006549.html)

    02/01/22
       I. RHIC Schedule

    • Collisions for physics
    • Tomorrow: likely APEX from 9:00 - 17:00

      II. Recap

    • No beam from ~7:00 -~19:30 yesterday
    • Smooth running once beam returned
    • TOF gas changed twice in the past 24 hrs
    • JH will bring up request for low intensity run for forward x-sec and alignment runs @ RHIC coordination meeting; alignment run will need magnet off, warmer ambient temperatures should be considered when doing this
    • David will increase current alarm for sTGC from 200 micro amps to 300
       

      III. Open issues/status

    • ETOF HV procedure has been changed; shifters should change to OFF for all states
    • APEX running tomorrow is contingent on a few ongoing RHIC/AGS activities. If postponed, tomorrow will be physics, and APEX will run Thursday

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-February/006531.html)

    01/31/22
       I. RHIC Schedule

    • Collisions for physics

      II. Recap

    • Luminosity highest in recent fills; improvement in RHIC polarization (~50%)
    • Issues at linac, unclear when will be fixed
    • Afternoon access requests: CeC 4 hrs, Jet 1/2 hr; may be earlier in morning  -> Will hear back from MCR/Vincent et al. shortly on schedule
    • Relatively smooth running over past 24 hours
    • Sampled luminosity: 5.1 pb-1 w/ 15.5 DAQ hrs; today 1.5 pb-1
    • Forward x-sec strawman trigger config test with low lumi in run 23030047 - will request fastoffline production

      III. Open issues/status

    • Day shift will take a laser run ASAP to see if they can get the West laser working; if there are problems, Alexei may go in to make some adjustments
    • David will make changes to alarm handler (see FCS HCal PS issue in elog in overnight shift)

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006515.html)

    01/30/22
       I. RHIC Schedule

    • Collisions for physics

      II. Recap

    • Very smooth running over past 24 hours
    • Sampled luminosity (note with updated x-sec): 4.6 pb-1 w/ 16.5 DAQ hrs; today 2.2 pb-1
    • Forward x-sec strawman trigger config test in Run 23029038; today's test at end of fill should aim to have MCR displace the beams to achieve a ZDC rate between 10-20k (~10 min); request offline to produce runs ASAP

      III. Open issues/status

    • Will test forward x-sec strawman trigger config today with low lumi beams at end of next fill

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006495.html)

    01/29/22
       I. RHIC Schedule

    • Collisions for physics over the weekend

      II. Recap

    • Weather conditions very bad due to winter storm. So far no major issues reported
    • Sampled luminosity: yesterday: ~4 pb-1 w/ 17 DAQ hrs; today: ~1 pb-1
    • Updated Vernier scan x-sec: 2.06 mb, all plots will be updated with new value
    • Very high JP1, JP2, ETH1, … rates noted in Run 23028049; had to power-cycle and reconfigure EEMC (with experts). The next shift also noticed high JP1 rates in one run
    • Ground fault of substation 6A @ 7:22 (another at 10 AM), addressed promptly by CAS watch
    • Forward x-section measurement strawman trigger configuration provided by Jeff, tested in Run 23028062 (in between fills). Test today @ end of fill - 5 min needed (note: another test tomorrow with beam steering for low lumi.)

      III. Open issues/status

    • Forward x-section measurement strawman trigger configuration will be tested today at end of fill. Should also be tested tomorrow with a lower luminosity
    • Shuai: will update dimuon timing window trigger configuration

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006485.html)

    01/28/22
       I. RHIC Schedule

    • Collisions for physics over weekend

      II. Recap

    • ZDC polarimetry done during rotator test
    • FCS gain tests after rotator test (some thresholds changed after)
    • Smooth running when beam came back during overnight shift
    • sTGC settings changed in evening shift, some “spikes” seen in QA plots (02:33)
    • New PROM check run added by Tonko to be done before; will update detector readiness checklist
    • Vernier scan was performed, preliminary cross section is lower than Run17
    • Carl proposed we take the dimuon trigger out for the first 1.5 hrs of a fill, and agreement was reached. This has to be done manually by shifters. Jeff will move where the SL can do this on the computer to be right next to the laser-run trigger, for convenience.
       

      III. Open issues/status

    • We should start planning the low luminosity runs (as early as next week). Jeff is working on the configuration, once ready we will ask MCR to steer the beams to lower rates and take a run with this configuration at the end of a fill

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006475.html)

     

    01/27/22
       I. RHIC Schedule

    • CeC will take 4 hrs of dedicated time
    • 1/2 hr coupling test
    • Rotator test after CeC/coupling test (STAR will do zdc polarimetry runs as we did last rotator scan, one before and after rotator ramp, we will also do a FCS gain check at the end of store for ~30 min.)
    • Back to physics

      II. Recap

    • Blue injection efficiency is getting worse every fill, reducing luminosity
    • AGS polarization also seems to be decreasing
    • Smooth running after maintenance, albeit with lower intensity
    • Various maintenance day activities completed yesterday, see elog for details

      III. Open issues/status

    • Jeff/Tonko: doing a few checks today, but may begin to have a daily PROM check run
    • Daniel: sTGC will have SL update configurations for pedestals
    • Hank: still investigating BBC crate issue
    • Xu: Current in problematic FST module seems to be stabilizing. Will keep an eye on for next 24 hrs, and may start to slowly ramp HV back up
    • David: Now has updated manuals ready for sTGC, also compressor monitoring 

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006456.html)

    01/26/22
       I. RHIC Schedule

    • Maintenance from 7 AM until around 3 (likely till 5 or so) PM, back to collisions for physics
    • Tomorrow: 4 hrs for CeC starting after natural end of a fill, some time between 9-12

      II. Recap

    • 3 good fills yesterday, one fill this morning with reduced luminosity
    • Yesterday: sampled lumi 4.1 pb-1 w/ 17 DAQ hrs; Today: 1.2 pb-1
    • Crew enabled BBC readout on evening shift as requested by Hank
    • Overnight L2 timeout error was from MXQ, not BBC. Will work on making this more explicit for shifters
    • New items during maintenance time:
      Hank/Jeff, will need control system to test new tier 1 file, will convene around 15:00 to start
      Akio, work on EPD

      III. Open issues/status

    • Manual for sTGC will be updated with current procedures
    • Explicit logging for L2 timeouts to aid shifters

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006438.html)


    01/25/22

       I. RHIC Schedule

    • Collisions for physics today
    • Tomorrow maintenance from 7 AM until around 3 PM, likely to extend longer

      II. Recap

    • Consistent 2+1 ongoing fills since last meeting; Yesterday: 4 pb-1 w/ 15.5 DAQ hrs (best day for int. lumi/hr); today: ~1.5 pb-1
    • FCS taken out of run during day shift, put back in during overnight shift
    • Unusual fill pattern in the sTGC shift-crew plots; Jeff and Daniel B were notified. Not immediately obvious if anything is wrong, but Jeff will follow up, and Daniel may change the plotting scales
    • One more (noted) online plot crash during evening shift
    • Several activities planned for tomorrow's maintenance day. See details in the star-ops log linked below
    • Daniel requested a special ~30 min run for sTGC at end of fill at 3:25 PM today, will coordinate with PC & SL

      III. Open issues/status

    • FST still tripping, experts changing trip limits and HV, expect to be OK after tomorrow's maintenance
    • Manual for sTGC will be updated with current procedures

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006406.html)

    01/24/22

       I. RHIC Schedule

    • Collisions for physics
    • Maintenance scheduled for Wed. (8 hrs?, depends on yellow rotators, TBD)

       II. Recap

    • Consistent stores in terms of luminosity (slight improvement w.r.t. yesterday’s report); two beam dumps with ~3+~6 hr downtimes
    • Yesterday: 3.2 pb-1 & 13 DAQ hrs; today: ~1.2 pb-1
    • Some continuing FST HV trips in evening shift; Xu adjusted limits and HV again
    • Fire department had to check a (false) smoke alarm in the cave at 22:30 (during downtime so no interruptions to data-taking)
    • EEMC, TPC/iTPC out for a few runs during the overnight shift; in both cases experts were consulted and the detectors eventually came back online (fixed by rebooting crate 90 and by "reboot all" in run control, respectively)
    • GMT bottle B has been replaced (18:14)
    • Online plots crashed twice over 24 hrs
    • Noted that a full rack of TPC power supplies has browned out a few times over the past week (TPC/iTPC sectors 13,14,15). This is an open issue; we should check it out during Wed. access
       

       III. Open issues/status

    • TPC power supply rack experiencing brown-outs. Needs to be investigated during next access
    • FST: plan to access Wed to swap boards (see details above and previous 10 am meeting minutes)
    • Jeff is looking into the online plot crashes

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006388.html)

    01/23/22

      I. RHIC Schedule

    • Collisions for physics

      II. Recap

    • Generally good 24 hours of data taking with no major issues. 3.3 pb-1 & 15 DAQ hrs.
    • Besides the low-luminosity fill early yesterday, the following fills look better (2+1 ongoing fills)
    • Already accumulated ~1.5 pb-1 this morning
    • FST Disk 1 Module 12 outer sector: ISEG module board trips due to current fluctuations; current limit increased and HV decreased. ABC board needs to be swapped next access - Xu Sun (see entries 8:34 and 09:48 from today, and overnight shift starting from 06:46)
    • Online plots crashed again this morning. Shift crew have noted in the log book where to find instructions to restart
    • >2000 errors in VC1 in one run (run 54) during last evening shift. Not noted in the log book, but it is assumed the shift followed instructions; noted during the meeting for information only

      III. Open issues/status

    • Alexie will work on addressing gas alarms
    • See FST issue above, will wait till tomorrow to formulate a plan on when to access

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006380.html)
    01/22/22

      I. RHIC Schedule

    • Collisions for physics today and tomorrow

      II. Recap

    • CeC all day starting from 8 AM
    • Smooth running overnight when beam came back after CeC (1:30 AM)
    • Beam intensity low in first fill of the day
    • Few iTPC RDOs masked out during cosmics and overnight runs
    • David called and had VME slow controls reset (13:11)

      III. Open issues/status

    • JH will communicate with accelerator to use a 280k ZDC rate threshold to optimize data taking
    • Increased TPC dead time was noticed by Jeff for an early run (it fixed itself the next run). The SL can check for this under the "Current rates" tab on the DAQ page. If there are any noticeable changes in rates/dead times during a run -> restart the run
    • Transient alarm from TOF pressure-transmitter tof_gas_PT-2 (yellow warning, low pressure) - Alexie said he will change today
    • The online plots may crash and stop updating to the latest run; the SL needs to check with the shift crew early in each run

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006374.html)

     

    01/21/22

     
      I. RHIC Schedule

    • CeC Au beam dev. will happen today from 8 AM - 9 PM. The end time may shift either way by one or two hours (likely ending around 8 PM)
    • Collisions for physics after CeC and over the weekend

      II. Recap

    • Generally the last 24 hours have been good; the new run procedure and new beam sequence have nicely improved data taking (~17 DAQ hrs and sampled lumi = 4 pb^-1)
    • Jeff implemented the new production_pp500 configuration disabling BBC readout during day shift
    • Short access in between fills to check the sTGC blower (~6 PM). It was running fine, and so was the compressor. Alexie also tuned the laser
    • The alarm handler is not connecting to the VME crates. That is something to be checked when there is no beam (see entry at 23:57)
    • Beam dump @ 01:13 overnight; no beam for rest of shift
    • Slight improvement in polarization, and also a new beam sequence with an immediate first beta* squeeze at the beginning of the fill; second at ~3 hrs
    • Unclear when (and if) the Siemens will come back online; at least another week of investigation
       

     III. Open issues/status

    • David/Alexie will try to add more monitors for TPC to aid shifters
    • Jeff is working on the error logs to help shifters remedy the configuration error messages that occurred over the last few days; suggestion: run a pedPhys run at the end of an extended downtime to preemptively identify any issues, instead of during a physics run.


    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006369.html)

    01/20/22

      I. RHIC Schedule

    • Physics running till 8 AM tomorrow, then CeC Au beam development until 9 PM tomorrow night

      II. Recap

    • Beam dump @ 8 AM; nothing for rest of morning shift due to RHIC issues with rotator scan; took calibration runs
    • Polarimetry runs start at 19:01, and continue until 22:52 (three 3M runs per setting [3x5])
    • Delay in run start due to “l4Disp” in component tree; recovered automatically in 7 min. (00:49) [not critical]
    • High temp. in sTGC air blower; sTGC removed from one run; was already in recovery after previous shift notified Prashanth (00:56)
    • Few TPC RDOs masked out during shift

     III. Open issues/status

    • sTGC reference plots now available
    • Today: Short access (~5 min.) in between fills to check sTGC air blower; current scheduled beam dump is 6 PM
    • Follow-up on BBC crate/L2 timeout discussion. Jeff will set new configuration to not read out BBC to remove this issue
    • Missing HV sections from TPC sectors 8, 9, 10, 11 in yesterday's runs, pointed out by Flemming; will have some follow-up with potential plots/alarms from Flemming/Alexie

    More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006352.html)

    01/19/22

      I. RHIC Schedule

    • Rotator scan until 4 PM
    • Physics evening/overnight and tomorrow

      II. Recap

    • Beam dumps at 8:00 and 10:55 (beam resumption at 9:40, 16:00)
    • L2 error "More than 2000 timeouts... Suppressing further timeout error messages. STOP the run!" (16:15)
    • Magnet trip at 17:22, followed by alarms in sTGC and TOF; fixed by David and Isaac, respectively (see log entry 18:05). EEMC w/ Radstone problem found by Will (see entry at 20:37). Trouble ramping until the filter in the magnet cooling tower was replaced by CAS (20:20) - following up on the periodicity/consistency of the filter cleaning schedule
    • In parallel to magnet problem, MCR reports cryo problems. No beam till end of shift
    • FST HV RDO issue at end of shift (00:12) ("Failure code 2" while turning on); procedure followed to restore; fixed by expert
    • Problems with FCS ("FCS:DEP09:1 failed"); runs had no FCS until fixed by Tonko at 2:39 (see entry); a similar issue also around 3:53
    • Run 23019011 - scaler rates of BBC, VPD, ZDC and EPD are much higher than expected (plot in entry at 2:16) - Jeff is looking into this (possible mis-config.)
    • TPC GUI computer froze (5:18); had to restart the computer; possible solution to be investigated - Wayne
       

     III. Open issues/status

    • Optimization of fills/beta squeezes - JH will discuss with Vincent
    • TPC GUI computer freezing will be looked at by Wayne
    • Large scaler rates in Run 23019011 will be checked by Jeff
    • sTGC critical plots for shifters

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006347.html)

    01/18/22

      I. RHIC Schedule

      II. Recap

    • Tim fixed the compressor for the sTGC-FCS air blower after the meeting yesterday (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63743)
    • 19:30 Magnet trip
    • DAQ error "Trigger Bits corrupted: (0-5) set=0x00000000 read=0x00000040 addr=0x512114". This is currently unknown. Jeff is looking into it.
    • BBC taken out last night; the BBC TAC difference plot has been empty since. Akio contends that this plot is empty whenever the BBC is out, so it's not indicative of a problem. Akio is changing his code so that it might be an indicator in the future.
    • 4 low lumi fills (2*10^10 bunch intensity -- 56 with STAR magnet off and 28 bunches) requested for later this week. Nominal schedule is 9am Saturday to 9am Sunday. Refer to Elke's slides for more detailed and current information

     

      III. Open issues/status

    • We need some clarification on the BBC crate issue. I thought this TAC difference plot was important. If not, the instructions need to be updated.
    • Short instructions for fixing holes in the BEMC need to be updated. They were proven wrong last night.
    • sTGC still has no critical plot instructions.
    • AH GUI still mislabels the air blowers.

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006317.html)

     

     

    01/17/22

      I. RHIC Schedule

      II. Recap

    • sTGC air blower AC compressor needs a fix, whenever there is access next time. (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63692). Confusingly this is under the eTOF in the AH, but this one is serving the sTGC and FCS. The eTOF shares the TPC blower this year.
    • Work on the air blower is happening this morning
    • We can probably run without the compressor. The sTGC LV will drop if it hits a limit (currently 80 F). Without the compressor we're still generally under this limit, but we're close (~77 F).
    • Many issues with the trigger, as we discussed yesterday. BBC errors from the daqlog will be suppressed so that the operators should never see those ErrCnt messages. Not all events with BBC errors make it to L2, so not all are counted toward the 200-error run-stop limit; that's how we've been going beyond 200. Crews do not need to stop runs for this. Hank might get Chris to increase this limit from 200.
    • BBC issue might affect analyses that use the earliest TAC information. The TAC difference is from the earliest TAC, so the information is there. This should affect a very small fraction of events, but it might need to be considered by future analyzers (or maybe even the production?)

     

      III. Open issues/status

    • air blower status
    • sTGC critical plot information
    • rotator study (schedule not fixed)
    • Detector readiness checklist needs updating with new BEMC instructions and new BBC instructions

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006326.html)

     

     

    01/16/22

      I. RHIC Schedule

      II. Recap

    • Updates on BBC instructions for the "Slow DMA error". Runs are going beyond 200 errors, which is a surprise.
    • TPC ran at near 100% dead from about 22:00-00:30. The node causing the problems was TPX35. We tried removing the node and the errors went away. I then tried masking the boards in these runs (sector 21 RB 5&6 and sector 22 RB 5&6, respectively); the errors persisted whichever boards were masked. We then rebooted the tpx35 computer and the errors stopped. (Summary Report - Night Shift). We may need mass computer reboots during some down time (only ~10 min)
    • Scaler board 6 is having unknown issues. It was out from run 29 yesterday to run 44. It's out again today.
    • We'll get some more detailed instructions on when to reapply voltage to the BEMC

     

      III. Open issues/status

    • sTGC shift crew manual
    • Instructions need to be updated for BEMC, BBC crate, and the detector readiness checklist.

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006303.html)

     

     

    01/15/22

      I. RHIC Schedule

      II. Recap

    • Power dip in the afternoon, it took some time to get everything back.
    • Several accesses yesterday. Tim had to go in late to work on the TCIM.
    • One very-short fill overnight. Poor quality beam. High emittance and low polarization.
    • The ZDC scaler rate is not correct, while MCR uses the ZDC rate to tune the beam. J.H. Lee has called MCR to ask them to use the BBC rate instead of the ZDC rate to tune the beam. (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63575)
    • This ZDC problem came from a threshold change on a NIM crate after the power dip, which depressed the values ~50% (as can be seen by comparing to the BBC values).
    • RICH scalers were out after the dip.
    • The sTGC crate issue seems to have been solved.

     

      III. Open issues/status

    • sTGC shift crew manual
    • Still talk of 2 beta squeezes
    • Plans for low lumi runs next week (Tuesday?)

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006286.html)

     

    01/14/22

      I. RHIC Schedule

      II. Recap

    • Injection plan is tentatively for midnight. Until then zero-field cosmics.
    • We're turning off the magnet now (ramping to zero current, no state change). Will be turned on ~midnight.
    • Access can be done. Tim went in yesterday to replace a flash card. Today the plan is Tim will replace an sTGC raspberry pi, Christian will replace two failed DEP boards for the FCS, and Prashanth will go in for a quick inspection.
    • Oleg will switch the ECal and work on the system after the DEP board replacement.
    • Jeff fixed instability with Configuration speedup. There should be a significant speedup for run starts now.
    • ^ We will change to 30 minute runs.
    • Tonko made a change in the ITPC processing in case of auto-recovery failure:

      Before) If the auto-recovery fails 4 times I force-stop the run with an appropriate message.

      Now) If the auto-recovery fails 4 times I raise iTPC BUSY with an appropriate message but I DON'T force-stop the run. In this case the forward program continues and it gives the Shiftcrew some time to figure things out.
      Crews are not expected to clear this busy.

    • A brown out this morning seems to have taken out a few sectors in the TPC and nothing else.
    • Carl has written detector requirements for the triggers, so that if a detector is out the corresponding triggers are disabled (no detector -> disabled triggers). This email has been circulated to the trigger board and (will be) to the ops list. If you have expertise or opinions, review this.
    • Elke wants detector experts to get ready for the low luminosity runs. This is just a reminder.

     

      III. Open issues/status

    • sTGC shift crew manual
    • sTGC LV still has some issues

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006275.html)

     

     

    01/13/22

      I. RHIC Schedule

      II. Recap

    • The collider will switch from the Siemens to the Westinghouse injector, which will reduce the quality of our fills.
    • Took out BBC from the triggers, as we always reached 200 BBC errors "Slow DMA error" (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63341) -- BBC not configured for the next 12 hours
    • Multiple problems with the FCS and FST stopping runs
    • We need some way to communicate to the shift leaders about what to do after repeated errors.
    • Jeff will work on run control during CeC
    • We should take advantage of the low luminosity fills from the Westinghouse generator to do some work which should be detailed later.
    • Tim will go in to check the LED control box for HCAL after the beam dump

     

      III. Open issues/status

    • No injection into RHIC, no beam
    • Daniel wants to do a few quick runs for the sTGC (one with and one without beam)

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006267.html)

    01/12/22

      I. RHIC Schedule

      II. Recap

    • Access right now (until 11am) -> 5 hours of CeC (4pm) -> Physics
    • BCE had to be power cycled because the 0x1d000000 board failed to configure.
    • The BBC hit 200 errors and was out of the trigger for the rest of the night.
    • Hongwei reported that he thinks he fixed the issue of l4 hanging run stop requests.
    • Jeff is working on making run start faster. Perhaps we'll switch to 30 min runs tomorrow.
    • Carl reported that the forward trigger is reading out the MTD and TOF. TOF mult may be useful, but the MTD likely is not. That should be removed.

     

      III. Open issues/status

    • Still no sTGC critical plot instructions
    • 30-minute runs soon
    • We're currently at ~ 80% of requested luminosity from RHIC

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006258.html)

     
     


    01/11/22

      I. RHIC Schedule

      II. Recap

    • Jeff: As per trigger board meeting adjusted rates in production_pp500_2022:

      Write ADC from every 101 events to every 201 events
      JP0 - removed trigger
      JP1 - reduce rate from 70 -> 35 Hz
      BT0 - reduce rate from 180 -> 100 Hz
      dimuon - reduce rate from 300 -> 250 Hz
      (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63133)

    • Magnet trip at ~6:42pm. Unfortunately the eTOF LV was forgotten again. This is a difficult thing to communicate.
    • Jeff added the logscales for the trigger plots, added l4_prim_nDedx/nHits plots to the primary tracks tab, disabled the BES & fixed target tabs in the l4 display, and added sTGC to the 'critical plots' tab.
    • EEMC was out all night
    • DAQ went 100% dead last night. Tonko came on and helped the crew fix the problem after some confusion (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63187). JP1 and JP2 were firing at 10 MHz because of the EEMC. Run came back to normal after removing these triggers.
    • The EEMC was fixed at ~9am.
    • Carl is working on a calorimeter matrix so that clicking out the EEMC will disable these triggers (among other such trigger/subsystem connections)
    • Chris implemented a gain correction for the QT boards.
    • Tonko proposed a color change for the BBC errors in the DAQ monitor, so that they don't drown out other errors.
    • Jeff moved some ethernet cables, increasing the data speed by ~15%. There are a few cables that might still be moved.

     

      III. Open issues/status

    • sTGC LV is constantly disconnecting. The solution to this is still not known.
    • We need sTGC critical shift plot instructions for the shift crew
    • We need higher luminosity
    • Daniel's 2 5-minute sTGC runs

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006245.html)

     
     

     

    01/10/22

      I. RHIC Schedule

    • TBD

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Communication with MCR about luminosity should probably be in terms of the ZDC AND rate (without the killer bit). MCR has more rates to look at, but we want the communication to be clear. Angelika will make something like Jamie's singles-corrected value for them to look at too.
    • Daniel wants to do a short sTGC test today - only 5 min run x2
    • Tonko recoded the ETOF TCD so that it fires the ETOF whenever TOF is in the run and gets a trigger.
    • MTD bias scan: 23009043 (13.1 kV), 23009044 (13.2 kV), 23009045 (13.3 kV), 23009046 (13.4 kV), 23009047 (13.5 kV), 23009048 (13.6 kV), 23009049 (12.8 kV).
    • Jeff and Tonko are working on speeding up the DAQ. Jeff will spend a few hours on this during the CeC today.
    • We've had a lot of L4 crashes at run stop. The run won't actually stop until L4 is rebooted. Some events might be too big for L4 and the buffer may need an increase. Jeff and Hongwei are working on this.

     

      III. Open issues/status

    • sTGC LV is constantly disconnecting. The solution to this is still not known.
    • We need to cull some QA plots
    • We need sTGC plots in the critical shift plots (this is underway)
    • We need sTGC and FST critical shift plot instructions for the shift crew
    • We need higher luminosity
    • l4 crashing at run end
    • Daniel's 2 5-minute sTGC runs

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006222.html)

     

    01/09/22

      I. RHIC Schedule

    • Physics until 0800 Monday.

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Tim replaced a control card for sTGC LV crate #2 during access at the end of a fill. (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=62992)
    • MCR had some issues leading to low luminosity fills and they haven't really managed to hit the luminosity they had a few days ago.
    • MCR might be able to give us two beta squeezes/fill.
    • sTGC LV lost communication frequently. Every time we need to change the status we have to restart the sTGC LV IOC and re-build the connection, but communication is lost again within 5-10 minutes. (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=63042). This seems to be ongoing. It isn't stopping data taking, but it is an inconvenience for the shift crew. Once Tim thinks he understands it, the fix will likely require an access.
    • Bill has proposed a sort of triangle cut to remove high-rapidity clusters in the TPC to help the DAQ. The number of primary tracks looks reasonable, but the number of globals is huge. He will send out a document on this to be discussed during the triggerboard meeting (Monday 1pm). It may need some additional information from the TPC group and the spin PWG.
    • The trigger group has implemented code to skip these problematic BBC issues. This should ease the issues it causes for STAR. They're still working on the source.
    • We have a lot of outdated QA plots. If you are/were a subsystem expert please check (see https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006211.html)
    • Tuesday will be cold. This might be a concern for the sTGC gas vent.

     

      III. Open issues/status

    • sTGC LV is constantly disconnecting
    • We need to cull some QA plots
    • We need higher luminosity

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006204.html)

     

    01/08/22

      I. RHIC Schedule

    • Physics until 0800 Monday.

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Yesterday MCR desteered the beams to check how much of our background was coming from single beams. Angelika: "The result from our quick separation test earlier today was that the BBC singles rates came down to 25-25 kHz with full separation (1.8mm). This corresponds to 0.24-0.32% of the singles rates at full overlap. This does not look like background (consistent with it not responding to collimator settings) and appears to be associated with collisions."
    • ^ Thus we cannot collimate out the background. There is ongoing discussion why the BBC singles rates are higher than in 2017.
    • Special FCS gain calibration runs taken yesterday afternoon (23007067, 68, 69, 75, and 76). We've requested production for these.
    • We asked RHIC a few days ago to reduce the luminosity ~20% to a ~250k ZDC AND rate. After this meeting JH requested we go back up to ~300k. We plan on asking for ~20% higher again in the future.
    • We would like 2 beta squeezes/fill. JH will ask about this.
    • Jeff set approximate scaler-rate ranges for the DAQ monitoring page. Bright red means the scaler rate for the trigger is too high; black means it is about right; brown means it is too low. These are only preliminary numbers. (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=62904)
    • Geary would like to do an HV scan for the MTD. This would comprise 6 20-min runs. These are normal physics runs, they'll just have different MTD biases.
    • It looks like the issues with the TPC automatic restart from Jeff and Tonko were ironed out. This should spare us the DAQ restarts needed when clusters of runs cannot start because of a TPC RDO.
    • Hongwei fixed some issues that looked like L4 was stopping the run. Jeff is still working on an issue where a window will pop up saying "configuration not sent" (I think the quote is right) after everything configures and will require a run restart.

     

      III. Open issues/status

    • FCS end-of-fill runs
    • 2 beta squeezes/fill request
    • We will start requesting more luminosity

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006174.html)

     

     

    01/07/22

      I. RHIC Schedule

    • Physics until 0800 Monday.

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • MCR will do some background tests by unsteering the beam at the end of the current fill
    • We will do some FCS gain scans on either side of this desteering
    • First fill of the night had about as high luminosity as we have seen this run
    • Several problems occurred at once. At flat top the TPC cathode wouldn't come up, the sTGC had connection errors, and the EEMC GUI went white. It took a long time (1.5h?) to get the first two back. The EEMC wasn't back in runs for ~5h.
    • VPD got new TAC offsets. Jeff updated the trigger IDs (https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=62817 + https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=62820 (5am))
    • "Tonko called in and made us enable all the triggers; more than half of them were not enabled since previous shifts." -- I suspect this is from triggers being taken out for VPD TACs
    • Daniel suggests we change the max TAC for the VPD. There will be an email about this. There also seems to be an issue with ch14 on the west VPD; this will be masked out.
    • Shuai updated the MTD timing window after the VPD was changed. We should change the trigger ID.

     

      III. Open issues/status

    • RHIC desteer beams to check background
    • FCS end-of-fill runs

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006174.html)

     

     

    01/06/22

      I. RHIC Schedule

    • CeC from now until they finish in the early afternoon. After that there are some beam tests and another hour-long access (~5 or 6pm). Then the focus is on physics until Monday morning.

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • No beam since yesterday. Yellow cavity issue overnight -> CeC took over starting around midnight.
    • Cosmic data taken around midnight was triggering at an extremely high rate due to TOF sector 3. Hank wonders if this is due to the BBC-MIX CPU swap.
    • Jeff and Tonko implemented the TPC autorecovery
    • Carl has a proposal for taking four 30-minute FCS runs at the end of a fill. Hopefully we can do this tomorrow.
    • We will get a new configuration for Tonko to consistently work on the TPC FEEs during the day

     

      III. Open issues/status

    • No beam = no VPD timing, no test of BBC crate, no test of TPC autorecovery, etc.
    • After the splitter was taken out and put back in the VPD pedestals look okay, but they continue to be very wide. This is true on both E/W. It's not known why.
    • FCS end-of-fill runs  + configuration
    • no-TPC configuration for Tonko 

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006154.html)


    01/05/22

      I. RHIC Schedule

    • After access we go into physics until 0800 tomorrow morning. 0800-1600 tomorrow is CeC.
    • Long access from 0700-1700 today.

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • 11:15 yesterday: access. Christian disconnected DSMI boards. Prashanth checked the sTGC gas lines. People who come in for access could do a bit better at writing shift-log entries.
    • MCR called and said that they have not received the BBC delayed signals since Sunday. Tim will replace a PS on a NIM crate tomorrow; this doesn't need access. I called MCR to let them know that they should get the signals back then.
    • We had issues with the BBC crate not being configured after TRG reboot/reboot all.
    • The CPUs for the BBC and MIX crates were swapped. 1M events were taken without error (w/out beam either). If the CPU is the culprit we'll start seeing the same issues with the MIX crate.
    • Christian tested the VPD splitter. It was found to be fine. Hopefully just reconnecting solves the problem and we can redo the timing.
    • Christian did not have time to look at the eTOF cable while the poletip was open, so this problem is not fixed.
    • sTGC HV was not put on an NPS. If there is a parasitic access opportunity Tim can do this quickly, but we won't request an access.
    • As a proof of principle Tim was able to reprogram 2 TPC RDOs, so the test discussed yesterday was a success.
    • A cable for the TPC gating grid was found to be disconnected. This was reconnected and things can go back to normal in run control.
    • The sTGC gas vent on the roof had partly frozen over. Prashanth is working on a solution, but we may see sTGC gas issues in cold weather.
       

     

      III. Open issues/status

    • We're watching the BBC and MIX crates for issues.
    • The VPD will need new runs to reset the timing
       

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006140.html)

     
     

    01/04/22

      I. RHIC Schedule

    • Schedule: physics beam until 0700 tomorrow. Access from 0700-1700 (01/05). Physics from 1700(01/05)-0800(01/06). CeC from 0800(01/06)-1600(01/06)
    • Long (10h -- 0700-1700) access tomorrow. Check PDF attached to (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006107.html) for the list of tasks.

     

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Beam energy change from 254.87 to 254.21 yesterday
    • Long (10h) access planned tomorrow.
       

     

      III. Open issues/status

    • Today we will have a 15min access for Christian to disconnect the DSMIs from the backplane of the BBC crate and for Prashanth to inspect the sTGC gas lines.
    • Jiangyong will cover day shift for the next few days followed (possibly) by Ben Schweid. Next week is unclear.
    • Tomorrow's 10am meeting will, instead, be held at 5pm BNL time.
       

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006107.html)

     

    01/03/22

      I. RHIC Schedule

    • 0900-1700: CeC
    • 1700 (Mon)-1600 (Tues): physics beam

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Beam energy change from 254.87 to 254.21 yesterday
    • Long (10h) access planned on Wednesday. This includes opening the East poletip. If you would like to enter please respond to the star-ops message.
       

     

      III. Open issues/status

    • VPD max TAC not changed since yesterday
    • The Trigger group needs a 1 hour access for Christian to swap processors for the BBC and MIX crates. This should happen tomorrow after a beam dump.
    • VMEs lost connection (white in GUIs) -- This requires the main CANBUS
         to be restarted in the IR. This needs a 10 minute access for Tim to go in
         and turn it off and on. We'll wait until right after CeC (maybe 5:30?) or
         any other available time today. In the future (Wednesday?) we need an NPS
         on this.
       

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006078.html)

     

    01/02/22

      I. RHIC Schedule

    • 0800-1600: injector polarization work + physics
    • 1600-0800: physics beam
    • 0800-1600 (monday 1/3) CeC
    • Beam energy change from 254.87 to 254.21 planned today

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Beam energy change from 254.87 to 254.21 planned today

     

      III. Open issues/status

    • Access yesterday at noon. sTGC was fixed by a crate powercycle (will be put on NPS in next major access) while DSM 0x17 was replaced
    • Lost a lot of time from beam aborts last night
    • Clusters of L2 issues happened again. Trigger group is looking into this again.
    • TOF gas freon bottle changeover was neglected for several hours.
    • Carl noted that the fast offline is just looking at the StPhysics stream (MinBias and HighTower) and not at jet events or the forward stream. I'll contact them.

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006057.html)

    01/01/22

      I. RHIC Schedule

    • 0800-1600: injector polarization work + physics
    • 1600-0800: physics beam

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • After midnight (year change) the DAQ monitor messages, the JEVP plots, and all of the scaler algorithms went down. The DAQ monitor messages came back with Tonko ~6am, and the servers for the plots weren't restarted until ~7:45am. Jamie messaged Jinlong about the scalers.
    • BBC DSM board stopped many runs, even after yesterday's access to replace a DSM board.
    • sTGC lost connection to a HV board, this needs access.
    • Access at 12pm today for both systems
    • Typically alarms are masked when detectors are ramping. Given the recent issues of HV going down for both the sTGC and TOF without alarms David has proposed removing these masks.
    • Tonko needs time set aside every day to bring back masked RDOs. He also needs periodic accesses to fix ones that cannot be brought back via software.

     

      III. Open issues/status

    • Access today at 12pm to replace BBC crate 0x17 board and check sTGC HV board
    • Tonko's time to bring back RDOs. In the short term we'll just stop the run for him to do this. In the long term we'll get a special configuration for forward measurements not including TPC.

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2022-January/006066.html)

     

    12/31/21

      I. RHIC Schedule

    • 0800-1600 - injector polarization work, 1600-0800 collision  for physics.

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • BBC DSM board issue caused serious problems starting yesterday
         afternoon. We'll plan on a 15-minute access when Akio/Christian are ready
         to go in and replace the board. This can be done in the 0800-1600 block. It
         looks like it will be 1:15pm.
    • Many TPC errors hurt running
    • L4 had a few issues that Hongwei fixed.
    • We're planning a beam energy change (254 GeV beams) for better
         polarization perhaps on Tuesday (01/04).
    • Polarization: B~40% Y ~50% until some time in the overnight shift when
         blue suddenly dropped to 11% (
         https://online.star.bnl.gov/apps/shiftLog/logForEntry.jsp?ID=62215).
         This is primarily due to them changing the fill pattern at the end of the
         fill, and was not actually representative of the polarization. Elke has
         mentioned that the offline analysis isn't nearly so bad (~40%?)

     

      III. Open issues/status

    • Access at 12:30 for the BBC VME crate

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2021-December/006046.html)

     

    12/30/21

      I. RHIC Schedule

    • Physics beam all day

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • iTPC caused many issues last night
    • BBC DSM board stopped one run, but not many like before.
    • Several hours of sTGC data were lost from confusion on shift.

     

      III. Open issues/status

    • Does iTPC need some change so that it doesn't stop so many runs in preparation and doesn't hang the system?
    • BBC DSM board issue is hard to understand, but the trigger group may replace the board or even the VME CPU

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2021-December/006028.html)


    12/29/21

      I. RHIC Schedule

    • Work on polarimetry in the afternoon (~4pm).
    • Physics at night (~midnight).

      II. Recap

    • production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
    • zdcPolarimetry_2022: trg+daq
    • CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst
    • Switched to 45 min runs
    • Laser runs back to every 4 hours (from a 2h cycle) since the TPC methane pressure is back to (or slightly above) normal.

      III. Tasks for commissioning

    •  sTGC voltage scan planned for tomorrow morning when Daniel and Tonko are both awake

      IV. Open issues/status

    • L2 (which is really the BBC DSM board) stopped many runs in evening and overnight. Hank + trigger group will look into it.
    • sTGC caused many issues in the evening perhaps related to a loss of connection to a raspberry pi
    • crate 7, board 4 of the BTOW complains in run control. This can be ignored, but Tonko will stop it to avoid confusion.
    • L4 seems to be aborting VPD TACs (https://lists.bnl.gov/mailman/private/star-ops-l/2021-December/006006.html). The rates don't make sense. This is being looked into.
    • Polarizations looked to be about B ~ 30, Y ~ 50 last night.

     More detailed discussion can be found on the ops list (https://lists.bnl.gov/mailman/private/star-ops-l/2021-December/006016.html)




    12/28/21

      I. RHIC Schedule

        Work on blue injection during the day to prevent increase in emittance
        Collisions later afternoon and overnight
        Maintenance day is rescheduled to Jan 5th, no planned access tomorrow

      II. Recap

        production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        zdcPolarimetry_2022: trg+daq
        CosmicLocalClock_FieldOn: trg + daq + itpc + tpx + btow + etow + esmd + tof + gmt + mtd + l4 + fcs + stgc + fst

      III. Tasks for commissioning

        a) Local polarimetry

        b) sTGC noise and HV scan and FST HV scan finished yesterday

        c) MTD HV scan, after avalanche/streamer analysis

        d) VPD splitter board (Christian, maintenance day)

        e) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      IV. Open issues

        a) FCS Mpod slot-1 looks dead, no alarm for LV

        b) Eemc-pwrs1 NPS has a network interface failure, spare is available with NEMA 5-20 plug (maintenance day, Wayne)

        c) Gating grid sector 21 outer disconnected, RDOs masked out, relevant anodes at 0 V, January 4th and 5th

      V. Plan of the day/Outlook

        a) Work on blue injection during the day

        b) Collisions later afternoon and overnight

        c) ETOF by expert operation

    12/27/21

      I. RHIC Schedule

        Diagnostic for quench detector and ramp development during the day
        Snake settings to compensate for partial snake
        Collisions for physics with store-to-store change in emittance in the afternoon and collisions overnight

      II. Recap

        production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        zdcPolarimetry_2022: trg+daq

      III. Tasks for commissioning

        a) Local polarimetry

        b) FCS gain calibration, full FastOffline for HCal

        c) sTGC noise thresholds, 2-3 hours without beam when possible, HV scan with beam (call Daniel + Prashanth)

        d) FST HV scan, sw update without beam, call Xu when physics, together with sTGC, dedicated production configuration

        e) MTD HV scan, after avalanche/streamer analysis

        f) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        g) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      IV. Open issues

        a) Leaking valve replaced for TPC gas, methane concentration from 9% to nominal 10% over these two days, more frequent laser runs (2 hours)

        b) TPX automatic power-cycling, ongoing

        c) Eemc-pwrs1 NPS has a network interface failure, spare is available with NEMA 5-20 plug (maintenance day 29th, Wayne)

        d) Gating grid sector 21 outer disconnected, RDOs masked out, relevant anodes at 0 V, January 4th and 5th

      V. Plan of the day/Outlook

        a) Ramp development during the day

        b) Collisions with emittance changes store-to-store later afternoon and collisions overnight

    12/26/21

      I. RHIC Schedule

        Slower ramp rate (x5) due to problem with quench detectors, work scheduled for tomorrow
        Ramp development during the day
        Collisions afternoon with intensity steps and overnight

      II. Recap

        production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        zdcPolarimetry_2022: trg+daq

      III. TPC gas

        a) Fluctuations in PI8 and CH4-M4 since yesterday afternoon, interlock overnight

      IV. Tasks for commissioning

        a) Local polarimetry

        b) sTGC noise thresholds, 2-3 hours without beam when possible, HV scan with beam (Daniel + Prashanth)

        c) MTD HV scan, after avalanche/streamer analysis

        d) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        e) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      V. Open issues

        a) Temperature increase in WAH, yellow alarms for several VMEs

        b) Gating grid sector 21 outer disconnected, RDOs masked out, relevant anodes at 0 V, January 4th and 5th

      VI. Plan of the day/Outlook

        a) Ramp development during the day, also stores for physics, MCR will inform

        b) Collisions with intensity steps afternoon and overnight

     

    12/25/21

      I. RHIC Schedule

        Collisions for commissioning
        Energy scan shall resume on 12/26

      II. Recap

        production_pp500_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4

      III. Updates

        a) BEMC PMT trips

        b) Set of triggers elevated to physics (entry for Run 22359013)

      IV. Plan of the day/Outlook

        a) Collisions for commissioning

        b) Energy scan tomorrow 12/26

    12/24/21

      I. RHIC Schedule

        Energy scan was interrupted by QLI in blue and power dip (2 out of 6 points done), access ongoing for recovery from the quench (~4 hours)
        Collisions afternoon, intensity steps, and overnight
        Energy scan shall resume on 12/26

      II. Recap

        zdcPolarimetry_2022: trg+daq for part of energy scan

      III. Tasks for commissioning

        a) Local polarimetry

        b) FCS gain calibration, FastOffline finished, ECal ok (pi0), HCal

        c) sTGC noise thresholds, 2-3 hours without beam when possible, HV scan with beam (Daniel + Prashanth)

        d) MTD HV scan, after avalanche/streamer analysis

        e) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        f) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      IV. Open issues

        a) Gating grid sector 21 outer disconnected, RDOs masked out, relevant anodes at 0 V, January 4th and 5th

        b) TPX automatic power-cycling

        c) Readiness and detector states

      V. Plan of the day/Outlook

        a) Access ongoing

        b) Collisions afternoon, intensity steps and overnight

        c) Energy scan 12/26, call Ernst

    12/23/21

      I. RHIC Schedule

        Energy scan, low intensity, afternoon: intensity steps, overnight: collisions

      II. Recap

        Collisions with 111x111 bunches, production_pp200_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        zdcPolarimetry_2022: trg + daq

      III. Tasks for commissioning

        a) Local polarimetry, scan will start later because of a quench (11:30 EST)

        b) Spin direction at STAR, longitudinal in blue is a part of systematic error

        c) Scaler bits timing ok now

        d) FCS gain calibration

        e) sTGC noise thresholds, 2-3 hours without beam when possible, HV scan with beam (Daniel + Prashanth)

        f) MTD HV scan, after avalanche/streamer analysis

        g) FastOffline, new request for FCS finished

        h) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        i) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      IV. Open issues

        a) BTOW LV + FCS LV alarm, minor -> major for channel trip

        b) sTGC LV

        c) Gating grid sector 21 outer disconnected, RDOs masked out, relevant anodes at 0 V, January 4th and 5th

        d) TPX automatic power-cycling

        e) Mailing lists to inform about any changes + logbook

        f) BTOW PMT recovery when opportunity for access, call Oleg (daytime/evening)

        g) Readiness and detector states

        h) ZDC-SMD pedestal for west horizontal #4

      V. Plan of the day/Outlook

        a) Energy scans, ZDC polarimetry, all detectors for machine commissioning

        b) Collisions overnight

    12/22/21

      I. RHIC Schedule

        Vernier scan, cross section compatible with Run 17, energy and squeeze ramps
        Longitudinal component in blue beam, possibilities include use of existing snakes or the PHENIX rotator, orbit imperfection tuning and energy scan

      II. Recap

        Collisions with 111x111 bunches, production_pp200_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        Polarimetry runs zdcPolarimetry_2022: trg + daq

      III. Tasks for commissioning

        a) Local polarimetry

        b) Scaler bits timing

        c) Trigger for Vernier scan

        d) FCS gain calibration

        e) sTGC data volume

        f) sTGC noise thresholds

        g) MTD gas, more SF6, HV scan, trigger config

        h) FastOffline re-running to include EPD

        i) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        j) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      IV. Open issues

        a) Temperature in WAH

        b) Gating grid sector 21 outer disconnected, RDOs masked out, relevant anodes at 0 V

        c) Anode HV for sector 15, channel 3 at 1000 V as default

        d) TPC Chaplin frozen (gui available also on sc3 or on alarm handler)

        e) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin, spare is available with NEMA 5-20 plug (Wayne)

        f) BEMC critical plots checked by shift crews (holds in general)

        g) Reference plots for critical plots

        h) SL on star-ops list

      V. Plan of the day/Outlook

        a) Scans related to longitudinal component and intensity steps during the day

        b) Collisions overnight

    12/21/21

      I. RHIC Schedule

        9 MHz RF cavity adjusted, can go to full intensity, alignment for yellow abort kicker, IPMs configured
        Snake current increased from 300 to 320 A, blue polarization improved to ~42%
        Stores during the day with intensity steps and overnight

      II. Recap

        Collisions with 111x111 bunches, production_pp200_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        Polarimetry runs zdcPolarimetry_2022: trg + daq
        Run with 0 V at TPC 21 outer, 400 V after that

      III. Tasks for commissioning

        a) FCS rates, x10-20 higher, test runs with changes in gain/masks, beam position?

        b) sTGC data volume, firmware update

        c) Local polarimetry, spin angle

        d) FastOffline re-running to include EPD

        e) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        f) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

        g) BEMC tolerable tripped boxes, 1 out till 29th, DOs follow procedure to recover, run flag as questionable, note in shift log (specific for crate)

        h) Vernier scan, low number of bunches

      IV. Open issues

        a) Gating grid sector 21 outer disconnected, 12h min + risk of need to remove parts in front, maintenance 29th, RDOs masked out, meeting today 3:30pm

        b) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin, spare is available with NEMA 5-20 plug (Wayne)

      V. Plan of the day/Outlook

        a) Stores during the day with intensity steps

        b) Collisions overnight

    12/20/21

      I. RHIC Schedule

        Blue snake re-wired for correct polarity (coil #3)
        Timing alignment for abort kicker in yellow beam
        Access at 10am for 9 MHz cavity
        Ramp development after the access, then collisions after 5pm till tomorrow day

      II. Recap

        Collisions with 111x111 bunches, production_pp200_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        Polarimetry run zdcPolarimetry_2022: trg + daq
        Blue polarization at 30%

      III. Tasks for commissioning

        a) FCS closing

        b) ZDC-SMD hot channel, daughter card to be replaced (Christian)

        c) Local polarimetry, scaler bits (Hank, Chris)

        d) FastOffline completed for previous 3 stores

        e) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        f) MTD dimuon trigger, prod id, trigger patch recovery at maintenance day

      IV. Open issues

        a) Increase in magnet current, east ptt, Monday morning

        b) Gating grid sector 21 outer disconnected, 12h min + risk of need to remove parts in front, maintenance 29th, RDOs masked out, meeting to determine the risks tomorrow

        c) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin, spare is available with NEMA 5-20 plug (Wayne)

      V. BEMC operation

        a) The shift crew should start looking at the critical plots; they are the same for BTOW as in previous years. The 2D hit map is the main indicator of HV status. Four boxes have been tripped since probably Saturday; this was not noticed.

        b) Detector operators: please don't hit the wrong button in the HV GUI; that can lead to a long HV recovery, as it did today (~3 hours).

        c) For operation instruction:

            (a) during long downtime the shift should run the btow_ht configuration just to check that the HV has not tripped; it looks like during the Saturday evening shift no one exercised the system at all.

            (b) given that recovering one PMT box may lead to a trip and then a long recovery of the entire BEMC HV, we had better not do this during overnight shifts, for example. Instead, perhaps the barrel jet triggers should be disabled, leaving only the HT triggers live, and the HV recovered between fills?

      VI. Plan of the day/Outlook

        a) Access 10am, beam development after

        b) Collisions after 5pm

    12/19/21

      I. RHIC Schedule for today-tomorrow

    Ramp-up intensity (up to 1.5*10^11) (limited by yellow RF)

    (partial) blue snake ramp-up 

    Collisions with luminosity likely with blue+yellow snakes overnight (111 bunches)

     

      II. Recap

     

    Collisions 111 bunches since 2am, production_pp200_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4

    Polarimetry run zdcPolarimetry_2022: trg + daq

    Abort gap at 2/8

    Intensity ~1*10^11 

    BBC/VPD/ZDC: 0.9 / 0.4 / 0.07M

    ~55% polarization for yellow, ~0% for blue

     

    Access: ZDC scaler / TCMI (Zhangbu, Tim) - fixed

            ZDC SMD E-V 2 hot channel (Aihong) - ongoing

     

      III. Tasks for commissioning

    a) Detector performance at higher luminosity / issues

    b) Any issues with "Beam loss"? (6:43 am)

    c) Trigger rates vs beam (e.g., BHT3 rate ~2x lower vs Run 17)

    d) ZDC SMD hot channel

    e) Local polarimetry

    f) FCS closing Monday morning?

     

      IV. Open issues

    a) Increase in magnet current, east ptt, Monday morning

    b) NPS for BC1 for 208V, power cord over two racks (Tim)

    c) Gating grid sector 21 outer disconnected, 12h min + risk of need to remove parts in front, maintenance 29th, RDOs masked out

    d) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin, spare is available with NEMA 5-20 plug (Wayne)

     

      V. Plan of the day/Outlook

    a) Ramp development (intensity, snake) during the day

    b) Collisions (run production) in owl shift

     

    12/18/21

      I. RHIC Schedule

        Ramps with higher intensity, abort gaps to be aligned, work for UPS for blue RF 9 MHz cavity
        Collisions with larger luminosity overnight (111 bunches)
        Tomorrow: Snake ramp up, intensity recommissioning, polarized collisions overnight

      II. Recap

        Collisions 56x56 bunches since midnight, production_pp200_2022: trg + daq + tpx + itpc + btow + etow + esmd + tof + mtd + gmt + fcs + stgc + fst + l4
        Polarimetry run zdcPolarimetry_2022: trg + daq
        60% polarization for yellow from RHIC

      III. Tasks for commissioning

        a) sTGC mapping

        b) FST status

        c) FastOffline requested for st_fwd

        d) ZDC east channel 2 on in QT, no coincidence in RICH scalers after TCIM reboot, incorrect discriminator level, access 2pm - 3pm, SMD to be checked also

        e) Local polarimetry

        f) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        g) FCS closing Monday if blue RF ok

      IV. Open issues

        a) BBC is ok (no trigger on previous xing on east) after power cycle to BBQ, bit check to be monitored (Akio)

        b) Increase in magnet current, east ptt, Monday morning

        c) NPS for BC1 for 208V, power cord over two racks (Tim)

        d) Gating grid sector 21 outer disconnected, 12h min + risk of need to remove parts in front, maintenance 29th, RDOs masked out

        e) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin, spare is available with NEMA 5-20 plug (Wayne)

      V. Plan of the day/Outlook

        a) Ramp development during the day, access for ZDC afternoon

        b) Collisions in owl shift

    12/17/21

      I. RHIC Schedule

        Potential controlled access till 1pm, ramp development after (squeeze ramp, blue tune kicker, intensity ramp up)
        Collisions in owl shift

      II. Recap

        Collisions 12x12 bunches since 4am, sTGC and FST voltage scans with field ON, tuneVertex_2022: trg + daq + tpx + itpc + fcs + stgc + fst + l4

      III. Tasks for commissioning

        a) FST (nominal voltages as before) + sTGC voltage scan (sTGC done, 2900 V is default for now)

        b) BBC lost earliest TAC on east, EPD was used for voltage scan instead

        c) VPD splitter board (Christian, maintenance day 29th, Daniel to be notified)

        d) Local polarimetry, results west ZDC only, code issue? (Jinlong), polarimetry runs tonight

        e) FCS mapping to be checked after cable swap

      IV. Open issues

        a) Increase in magnet current, east ptt

        b) BC1 fan tray swap, no alarm when ongoing, no on/off via slow controls, NPS? (Tim, David)

        c) Gating grid sector 21 outer disconnected, anode at sector 21 outer at 800 V, RDOs are masked, capacitance consistent with cable alone, 12h min + risk of need to remove parts in front, maintenance 29th

        d) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        e) sTGC has no data in first run after LV power up, under investigation

        f) star-ops mailing list is slow in delivery, also other lists (stgc)

        g) AC in control room

      V. Plan of the day/Outlook

        a) Potential controlled access till 1pm, ramp development after

        b) Collisions in owl shift, production configuration (prod ids except mtd), ZDC polarimetry, FCS closing Sat/Sun

        c) Forward detectors by experts only, sTGC mapping (Daniel)

        d) Saturday: ramp development during the day, collisions in owl shift

    12/16/21

      I. RHIC Schedule

        Blue snake reconfigured for coils #1 and #3, tests for abort kicker UPS
        CeC till 8pm, beam development after
        Collisions in owl shift

      II. Recap

        No collisions because of a water flow problem at the beam stop, caused by an incorrect orifice
        Cosmics, tune configuration

      III. Tasks for commissioning

        a) Magnet on/off? -> feedback from FST by 4pm EST

        b) FST + sTGC voltage scan, procedure will be set by magnet on or off case

        c) MTD, no dedicated commissioning run?

        d) VPD slew parameters loaded, TAC windows set, investigation ongoing for splitter board

        e) Scalers board, signals ok, more than 6 bunches needed

        f) FCS status

        g) ZDC status ok

      IV. Open issues

        a) BC1 multiple power-cycle on crate, SysReset, on/off in slow controls? Fan tray swap when possible (Tim)

        b) Gating grid sector 21 outer disconnected, anode at sector 21 outer at 800 V for no gain, fix at maintenance day

        c) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        d) sTGC has no data in first run after LV power up, under investigation

        e) star-ops mailing list is slow in delivery, also other lists (stgc)

      V. Resolved issues

        a) EPD mapping at the splitter

        b) Magnet monitoring ok after maintenance yesterday, alarm limits ok

      VI. Plan of the day/Outlook

        a) CeC till 8pm, beam development after

        b) Collisions in owl shift

    12/15/21

      I. RHIC Schedule

        Maintenance for CeC and blue snake re-wiring, ramp development after 4pm
        Collisions late afternoon / overnight

      II. Recap

        Collisions with 12x12 bunches with forward detectors, production_pp500_2022, tuneVertex_2022

      III. Open issues

        a) sTGC voltage scan, another scan today (Prashanth + David to be called), in sync with FST

        b) FST voltage scan, looks ok from last night, another scan today

        c) tuneVertex_2022 for sTGC and FST voltage scans, runs for target number of events + add FCS, use BBC trigger

        d) Lists of tasks for collisions from experts passed to SL

        e) FCS status, trigger list

        f) VPD one channel to be checked for max slew - mask out this one for now, TAC window, need feedback on pedestals while still in access, cabling check (Christian)

        g) EPD calibrated now

        h) Cal scan, ESMD PMT voltages updated, ETOW phase to be applied

        i) ZDC towers check ok (Tomas), signal ok

        j) One run with ZDC-SMD HV off, signal cables checked ok on side patch (Aihong)

        k) Cabling check today (Christian)

        l) Scalers board, SMD counts still at RHIC clock (Jinlong)

        m) MTD commissioning (Shuai), VPD trigger and cal needed, instructions for SL by Shuai

        n) BC1 power cycled on crate (Tim), booted ok, CAN address 73 will be set (Christian)

        o) Gating grid status (Tim), sector 21 timing

        p) Laser runs every 4 hours

        q) TAC windows for BBC, EPD, ZDC, VPD (Eleanor, Jeff), log affects, delay does not, new tier1 fixed it, readback added

        r) Magnet alarm limits

        s) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        t) sTGC has no data in first run after LV power up, under investigation

      IV. Resolved issues

        a) Commissioning done for: BBC, EPD, BTOW, ZDC

      V. Updates

        a) production_pp500_2022, BBC, BBCTAC, BHT3 and BHT3-L2W elevated (Jeff) + ETOW, VPD (almost)

        b) Contact Jeff when a trigger can elevate to physics

      VI. Plan of the day/Outlook

        a) Restricted access now

        b) Cosmics for gating grid, magnet up preferred

        c) Beam development after 4pm, detectors in proper safe state

        d) Collisions in the evening / overnight

        e) SL tasks the shift crew based on what we're running

    12/14/21

      I. RHIC Schedule

        Damage in blue snake after power dip on Sunday evening, could use coils #1 and #3, access to rewire for these coils
        UPS was disabled for abort kicker
        Access now for kicker, snake and CeC, ramp development afternoon, collisions overnight

      II. Recap

        VPD, EPD and Cal scans
        Magnet trip yesterday evening
        Controlled access now (~4 hours)

      III. Open issues

        a) VPD commissioning (Isaac, Daniel), non-VPD trigger (Jeff), slew test with beam

        b) EPD commissioning (Rosi)

        c) ZDC SMD bits in scalers fire at RHIC clock (9.38 MHz), test with HV off, pedestal issue, cabling (Jinlong + Hank)

        d) ZDC commissioning (Tomas, Zhangbu), signal seen, work for 1n peak

        e) Cal scan (Oleg, Will J), BTOW 4ns shift, crate-by-crate scans

        f) MTD commissioning (Shuai), VPD trigger and cal needed, instructions for SL by Shuai

        g) Local polarimetry (Jinlong)

        h) BC1 crate off? fails during boot, spot crash in startup file, power-cycle now (Tim)

        i) Spike in 1st gating grid time bin (David), test now with cosmics

        j) TAC windows for BBC, EPD, ZDC, VPD (Eleanor, Jeff), test today, log affects, delay not

        k) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        l) sTGC has no data in first run after LV power up, under investigation

      IV. Resolved issues

        a) Commissioning done for: BBC

        b) Cards in EQ1, EQ2 and EQ3 replaced yesterday (Christian)

      V. Updates

        a) Separate trigger configurations for commissioning (Jeff)

        b) File stream name for forward detectors: st_fwd

      VI. Plan of the day/Outlook

        a) Access now, beam development during afternoon, collisions overnight

        b) Production configuration with final prescales, start with BBC, BTOW, production_pp500_2022

        c) Forward commissioning with low intensity beam, Xu, Prashanth, David, VPD and EPD needed before

        d) Magnet work tomorrow

        e) Scalers need to run

    12/13/21

      I. RHIC Schedule

        Polarization development and ramp development during the day, collisions with rebucketed beam late afternoon or overnight
        Access at IP2
        Low intensity now because of mistimed abort in both rings at the power dip
        Cogging depends on snake availability, needed for correct longitudinal position of vertex
        Lossy blue injection

      II. Recap

        Collisions yesterday after 8pm, BBC HV scan, ended by power dip
        Next collisions 5 am, ZDC polarimetry with singles at 2 kHz, VPD HV scan, EPD timing scan

      III. Open issues

        a) VPD HV 13.01 didn’t turn on, at lower voltage (1627 V) now ok, might need to swap the channel

        b) Non-VPD trigger needed (BBC coincidence in L4) for VPD slewing correction, Jeff will make separate configuration file, instructions for SL by Daniel

        c) Separate configuration for local polarimetry (Jeff)

        d) EPD commissioning (Rosi)

        e) ZDC commissioning (Tomas, Zhangbu)

        f) Every trigger detector sends a message over star-ops when done with commissioning

        g) Cal scan (Oleg)

        h) MTD commissioning (Shuai), VPD trigger needed, instructions for SL by Shuai

        i) Spike in 1st gating grid time bin, seen as perpendicular planes in event display, should fix after new pedestal, open/close test after beam dump, IOC restart (David)

        j) TAC windows for BBC, EPD, ZDC, VPD in investigation (Eleanor, Jeff), monitoring to check the registers

        k) L4 was not present because of incorrect R and z vertex cuts, ok now

        l) Collision triggers in tune_22 for calibration and tune configuration

        m) Dead QT32B daughter card for EPD (daughter A in EQ3 slot 10), also cards in EQ1 and EQ2, access needed to replace (Christian), controlled access (SL), SL calls Rosi after done to check

        n) Local polarimetry in progress (Jinlong), not yet from scalers

        o) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        p) sTGC has no data in first run after LV power up, under investigation

        q) iTPC Sector 13, RB 3 masked out and powered off, keep like this

        r) FCS LV was overheating, rack backside was opened (Tim), 1 deg drop, not critical now

        s) No ETOF

      IV. Resolved issues

        a) BBC commissioning done for run 22, Akio not in call list for collisions

      V. Updates

        a) Call list for collisions, SL informs over star-ops

        b) File stream name for forward detectors: st_fwd

      VI. Plan of the day/Outlook

        a) Potential access

        b) Tune configuration with beam development, detectors in proper safe state

        c) Could get collisions later afternoon or overnight

    12/12/21

      I. RHIC Schedule

        Collisions later afternoon (4/5pm), likely 6 bunches rebucketed
        Magnet quenches were caused by temperature problem at 1010A, not beam induced
        Lossy blue injection, work needed on Y2A RF cavity
        Rebucketing successful yesterday with 6 bunches
        Scans and ramp development till 4pm, stores with collisions after
        
      II. Recap

        tune_22: trg + daq + btow + etow + esmd + fcs

      III. Open issues

        a) Global timing with collisions

        b) TAC windows for BBC, EPD, ZDC, VPD (Eleanor, Jeff), test with rebucketed collisions

        c) Dead QT32B daughter card for EPD (daughter A in EQ3 slot 10), access needed to replace (Chris)

        d) Local polarimetry (Jinlong)

        e) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        f) First run after LV power up sTGC has no data, under investigation

        g) iTPC Sector 13, RB 3 masked out and powered off, keep like this

        h) FCS LV was overheating, rack backside was opened (Tim), 1 deg drop, not critical now

        i) No ETOF

      IV. Resolved issues

        a) Phones were out yesterday night due to update, fixed early morning

      V. Updates

        a) New Readiness checklist, cosmics with 8+ hours without beam

      VI. Plan of the day/Outlook

        a) Tune configuration with beam development, detectors in proper safe state

        b) Could get collisions later afternoon or overnight, call list for shift leaders

    12/11/21

      I. RHIC Schedule

        Polarized scans and rebucketing tests till 8pm, then CeC until tomorrow morning

      II. Recap

        Collisions at 3am, 28 bunches, both snakes ramped, polarization 44% blue, 54% yellow, beam abort after 20 minutes
        Next collisions 8am, ended by blue quench near the snake (but not the snake)
        Cosmics, tune_22, trg + daq + btow + etow + esmd + fcs

      III. Open issues

        a) Global timing with collisions

        b) Phones out at STAR and MCR due to update to phone system, fake (?) magnet trip in west trim at the same time, now back (9am)

        c) Investigation in DSMs on TAC windows for BBC, EPD, ZDC, VPD (Eleanor, Jeff), affects triggers which use TAC, read from registers is different from write, access will be good to test the VME board-by-board (Jeff)

        d) sTGC gas pressure increased after yellow alarm (Prashanth)

        e) Timing for scaler board with beam (Chris), expect to be ok, needed for local polarimetry (Jinlong)

        f) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        g) First run after LV power up sTGC has no data, under investigation

        h) iTPC Sector 13, RB 3 masked out and powered off, keep like this

        i) FCS LV was overheating, rack backside was opened (Tim), 1 deg drop, not critical now

        j) No ETOF

      IV. Resolved issues

        a) L4 plots missing from Jevp, fixed (Jeff)

      V. Updates

        a) New Detector States, 12/10, sTGC for both HV & LV is OFF for PHYSICS and Vernier scan, FST HV is OFF for PHYSICS and vernier scan

        b) Output from individual ZDC towers tested (Tomas)

      VI. Plan of the day/Outlook

        a) Tune configuration with beam development, detectors in proper safe state

        b) No collisions overnight (CeC instead)

        c) Cosmics only if there will be 8+ hours without beam

    12/10/21

      I. RHIC Schedule

        Blue9 snake ramps today till 4pm; there was a shorted diode (protection against spikes from transient current)

        Recommissioning after that if blue snake is available, or rebucketing and ramp development if not

        Stores with collisions during owl shift if ready by 10pm today

      II. Recap

        Collisions at 4am for short time, ended by multiple beam aborts, access ongoing now
        tune_pp500_2022 with collisions, tune_22 or cosmics, field on

      III. Open issues

        a) Jpsi*HTTP at 1 kHz without beam, hot/warm tower ETOW/BTOW, leave out until calorimeters commissioned

        b) Update in TAC min/max for ZDC, EPD, BBC (Jeff)

        c) BBC HV adjusted to lower values (initial), need to finish HV scan (Akio)

        d) FCS LV overheating, rack backside to be opened (Tim), 1 deg drop, not critical now

        e) iTPC Sector 13, RB 3 masked out and powered off, keep like this

        f) Timing for scaler board with beam (Chris)

        g) Mask from L0 to L1 for a trigger patch

        h) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        i) First run after LV power up sTGC has no data, under investigation

        j) No ETOF

      IV. Resolved issues

        a) Remote access to scalers for polarimetry on cdev for Jinlong, was related to 64bit/32bit change, ok now

        b) Fan tray for EEMC CANbus, crate #70 replaced (Tim), also reboot to main CANbus, gating grid restored

      V. Updates

        a) VPD voltages HV changed to 2013 values (Isaac)

      VI. Plan of the day/Outlook

        a) Schedule from RHIC is largely uncertain, could get collisions in owl shift

        b) tune_pp500_2022 with collisions, tune_22 or cosmics, field on, safe state when beam development

        c) FST keep off until very nice beam, expert present for any operation (Xu)

        d) sTGC by expert only (Prashanth)

        e) Commissioning starts with collisions on, state of experimental setup now

    12/09/21

      I. RHIC Schedule

        Possible collision setup in upcoming owl shift, progress on collimator, kicker alignment and timing, vertical injection matching
        and yellow injection damper, safe state important for detectors during beam development.

        blue9 snake: beam induced quench without substantial beam loss, question on magnet training or real problem,
        access today for a p.s. related to the snake

        Today after p.s. access: beam development without blue snake

      II. Recap

        Cosmic runs with field on, tune_22 with beams, trg + daq + btow + etow + esmd + fcs

      III. Open issues

        a) EEMC CANbus fan failure, crate #70, few minutes access to replace the tray (Tim)

        b) sc5 reboot by mistake by DO when trying to reboot crate #70 caused by incomplete instructions

        c) FCS LV overheating, rack backside to be opened (Tim), ½ hour to observe temperatures

        d) Level for yellow alarm for sTGC pentane gas, done

        e) Online database not visible yesterday ~2pm → ~5pm, Dmitry was called

        f) sTGC HV IOC having multiple instances (red alarm), ok now

        g) EEMC and EQ2, MXQ, and BBQ in alarm handler (David, input from experts on what to unmask in alarm handler)

        h) iTPC Sector 13, RB 3 was asserting busy even masked out, was powered off, Tonko + Jeff will take a look

        i) BCE DSM2, new algorithm uploaded, in test yesterday, in trigger, L0 to be checked by Chris

        j) Instructions on recovery for BBC/ZDC/VPD HV system (LeCroy1440) communication after power failure, pwd to bermuda needed

        k) Remote access to scalers for polarimetry on cdev for Jinlong

        l) Add instructions to recover forward detectors after power dip (sTGC call experts), Oleg T will add instructions for FCS, FST call experts

        m) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        n) sTGC auto-recoveries (is a place-holder for final message), empty plots for a few runs → on hold for commissioning

        o) Disk that stores TPC sector 8 pedestals needs to be replaced by Wayne (not urgent)

        p) No ETOF

      IV. Resolved issues

        a) Scaler board replaced during access yesterday (Chris), SCLR48 in trigger since run 22342037

      V. Updates

        a) Update in sTGC HV and LV GUI (channel numbering), instructions are updated

        b) Magnet current limit alarm, Flemming + David for default limits

      VI. When collisions are delivered

        a) Commissioning plan

        b) Time scan for BEMC together with ETOW and ESMD

        c) ETOW and ESMD basic QA with collisions to test if its configuration is ok

      VII. Plan of the day/Outlook

        a) beam development with detectors in correct safe states, tune_22 or tune_pp500_2022, cosmics when possible

        b) p.s. access for blue snake, beam work till midnight, possible collisions setup during owl shift

        c) ETOF may be turned over to SC for a few weeks during the run

    12/08/21

      I. RHIC Schedule

        Test for blue9 snake ok (partial snake, ongoing work), beam work till 10:00, access 10:00 -> 12:00,
        then injection,  ramps and rebucketing till tomorrow (12/09 4pm)

      II. Recap

        Cosmic runs with field on, tune_22 with beams, trg + daq + btow + etow + esmd + fcs

      III. Open issues

        a) Restricted access 10am today, scaler board (Chris, finished), also for EPD (finished)

        b) Add instructions to recover forward detectors after power dip

        c) EEMC and EQ2, MXQ, and BBQ in alarm handler

        d) iTPC Sector 13, RB 3 was asserting busy even masked out, was powered off, Tonko + Jeff will take a look

        e) sTGC HV at 2900 V for now

        f) sTGC auto-recoveries (is a place-holder for final message), empty plots for a few runs → on hold for commissioning

        g) BCE DSM2, new algorithm uploaded, ready to test (during today), not in trigger now

        h) Eemc-pwrs1 NPS which has a network interface failure and affects access to eemc-spin needs to be replaced (not urgent)

        i) Disk that stores TPC sector 8 pedestals needs to be replaced by Wayne (not urgent)

        j) No ETOF

        k) Instructions on recovery for BBC/ZDC/VPD HV system (LeCroy1440) communication after power failure, pwd to bermuda needed

        l) Access to scalers for polarimetry on cdev for Jinlong

      IV. Resolved issues

        a) BTOW crate Id 8 failed configuration fixed, (disconnected 0x08 board 1 and put it back)

        b) Replaced the problematic DSM1 in BCE crate, hole in trigger patch 250-259 seems gone from btow_ht run, 22340037

      V. Updates

        a) Two screens for sc3 (VPD/BBC/ZDC HV)

        b) 30 new mtd plots to the JevpPlots

        c) evb01/evb07 added to the DAQ default

        d) New firmware in BE004 DSM2

        e) sTGC LV IOC to follow the procedure

        f) To power cycle the EEMC, follow the operation guide; power off and on is not enough, follow the manual strictly

        g) TPC current calibration should be done once per day

        h) Magnet current limit alarm, Flemming + David for default limits, sampling frequency?

      VI. When collisions are delivered

        a) Global timing, tune_pp500_2022 trigger definition

        b) Time scan for BEMC together with ETOW and ESMD, files from DAQ by Tonko, min bias trigger, time interval and steps to be set

        c) ETOW and ESMD basic QA with collisions to test if its configuration is ok, first reference plots will be available with collisions

      VII. Plan of the day/Outlook

        a) beam work till tomorrow afternoon, cosmics when possible

        b) no collisions are expected till tomorrow 12/9 4pm at least

        c) exercise for BBC/VPD/ZDC lecroy recovery after power failure (David)

        d) ETOF may be turned over to SC for a few weeks during the run

    11/17/21 to 12/07/21 Zilong Chang

    11/16/21

    RHIC schedule: no new info: “2-3 weeks” start-up delay (incomplete cryo controls upgrade).

    11/15 Blue 4K cool-down, starting 1/2 (12-6),  11/29 <  for Yellow

    11/15: magnet polarity change RFF -> FF

    11/15: Calibration sets taken: Long Laser run,  polarity flip, long laser run + laser with resistors in the chain  

    Currently 1.5MOhm in the chain, How long?

    will learn from the analysis (Gene) of the data set on the short in TPC

    Magnet stable

    All detectors are included and currently running (except ETOF)

    gmt trigger is enabled

    Issues and resolved:

      MTD:  issue with LV.   RDO masked out (1 out of 2): running / Geary

      BTOW: configuration fail. Fix by resetting board / Oleg

    Plan for today

      new shift crew + period coordinator (Zilong)

      NO access 07am-12pm tomorrow (11/17) for access controls test
     
      cosmic with all available detectors with Forward FF

      run until Thursday morning with FFF

      Flip the polarity back to RFF on Thursday morning (combined with BBC installation, MTD work)

      let crew know the detector is not ready to be included

      laser / 4 hours (separate run)

      pedestal / shift

      TOF,MTD  noise run / day 

    11/15/21

     

    RHIC schedule: the same: “2-3 weeks” start-up delay (incomplete cryo controls upgrade).

    11/15 Blue 4K cool-down,  11/29 <  for Yellow

    Short term plan:

    11/15: Flipping magnet polarity RFF -> FF  (BBC installation postponed)

    11/15: Long Laser run (done),  polarity flip (ongoing), long laser run + laser with resistors in the chain (to be done)

    11/18: evaluate the short in the TPC with data taken with the two field settings, and decide on the need to open the East pole-tip to fix the short if necessary

    magnet stable

    a trip yesterday 4:30pm with “daily” power dip

    TPC GG issue resolved: with correctly reloaded value

    Issues, detectors not included:

      FST: Error with HV ramping / 7am

      STGC running, included but HV off

      MTD: too many recoveries. LV control / 3am

    Shift procedure

    FST, STGC under shifter control?

    pedestal after “warm up” time

    Plan for today

      cosmics with field on, all available detectors, after RFF -> FF flip

      long Laser runs

      TPC, BTOW, ETOW, ESMD,TOF, FCS, sTGC, FST, MTD

      let crew know the detector is not ready to be included

      laser / 4 hours (separate run)

      pedestal / shift

      TOF,MTD  noise run / day


    11/13/21

    RHIC schedule: the same: “2-3 weeks” start-up delay (incomplete cryo controls upgrade).

    11/15 Blue 4K cool-down,  11/29 <  for Yellow

    Any beam activities with only Blue cold?

    Short term plan:

    11/12 - 11/15: continue with cosmic data taking at Reverse Full Field

    11/15 Monday morning: Magnet polarity flip, BBC (West) installation

    11/15 - cosmic (+laser) data taking at Forward Full Field.

    11/18: evaluate the short in the TPC with data taken with the two field settings, and decide on the need to open the East pole-tip to fix the short if necessary

    magnet stable

    trip yesterday likely from power dip

    alarm: set value, range to reduce false alarm from fluctuation

    ETOW: cable fixed. DAQ error. trigger/hardware/DAQ issue?

    FST overheating module 3-11. not resolved. masked-out.  Still out of run?

    STGC: DAQ 0. Still out

    shift QA plots, online QA, event display: lagging

    laser run: separate

    Plan for today

      cosmic with field on with all available detectors

      TPC, BTOW, ETOW, ESMD,TOF, FCS, sTGC, FST, (MTD)

      let crew know the detector is not ready to be included

      laser / 4 hours (separate run)

      pedestal / shift

      TOF noise run / day

Reference plots and instructions for shift crew (Current, Official Version)

Attached are critical reference plots for every subsystem, with descriptions, instructions on what to look for in each plot, and the action that should be taken.

0 - Experts should do their best to select plots that are independent of the triggers we are running.
1 - Experts should limit their critical plots to a maximum of 4 plots for every subsystem.
2 - Experts should maintain their plots up to date and inform the period coordinator of any updates.
3 - Period coordinator should make sure that the printed hard copy matches the online copy.
4 - Shift crew should closely monitor all of the corresponding plots online.


------------------------------------------------------

Note:
Experts description for every plot should answer the following questions:

1- What is the plot showing?
2- What aberrations should one look for in the plot?
3- What should be done if there is a problem?


------------------------------------------------------

I'm going to copy the documents below as links here to group subsystems together:

BTOW critical plots (Run 22 and Run 23)

EEMC critical plots (06/08/23)

EPD


eTOF

FCS critical plots (Run 22 and Run 23)

FST (05/06/24)

sTGC (03/30/22)

TOF critical plots (08/23/2024)

MTD critical plots
(07/23/2024)

VPD critical plots
(01/08/20)


TPC plots and comments (09/18/2024)

Misc advice (01/05/22)

  • Do not check only the plots that have references; check the entire set of plots (L4 and shift). Even if you don't understand what a plot means, you can still spot holes in a plot or a plot not filling when it usually does.
  • Check the L0Trg plots (usually on page 21). If the BBC is not configured, the BBC TAC difference plot will be empty. That is: shift > Trigger > Trigger_Detector_Vertex > BBC TAC Difference.
  • FST MPV plots only fill at the end of a run, so they will look empty while the run is in progress.
  • The GMT plot Shift > GMT > Timing > Sum is empty for some runs and not others. Nikolai said "QA plots can be sometimes empty because the occupancy / statistics for GMT is rather small one." This does not require contacting the expert.
  • Check the TPC drift velocity during laser runs. If it is empty, the shift crew/leader likely forgot to check the laser in run control. A laser that doesn't turn on should be obvious to the DO.
  • The statistics in the drift velocity plot are not very indicative of the number of events recorded. The plot shows a sampling that can vary run-to-run and will always have far fewer entries than events recorded.
  • Double peaks in the TPC drift velocity plot may not indicate anything serious; they come from the online code, which is not the same code used offline. The expert probably can't do anything about it, but you can note it in the shift log.
  • A hot tower can dramatically change the scale of the BEMC eta-phi plot, making almost all channels the same color. This is in fact typical and not cause for alarm.

WAH Network Switch NPS details

This is NOT a comprehensive list of NPS units in the WAH, only those used with network switches

Note that in some cases the NPS units listed here also supply power to devices other than network switches.

The table (columns: Network Switch, Location, NPS, Outlet, Model, User Accounts) is reproduced below as one record per switch; bracketed numbers refer to the notes that follow.

splat-s60.starp (130.199.60.118)
  Location: SP 1C4
  NPS: netpower1.starp (130.199.60.252), outlet 3
  Model: APC AP7900B
  User accounts [2]: staradmin (wbetts), trgexpert (wbetts, ?), device (wbetts), jml, tlusty

splat-s60-2.starp (130.199.60.138)
  Location: SP 1C4
  NPS: netpower2.starp (130.199.60.253), outlet A1
  Model: WTI NPS-8
  User accounts [2]: staradmin (wbetts (pw or SSH key)), akio, crawford, cperkins, jml, tlusty

east-s60.starp (130.199.60.251)
  Location: east side rack under stairs
  NPS: eastracks-nps.trg (172.16.128.226), outlet 8
  Model: APC AP7901 [3]
  User accounts [2]: apc (wbetts), device (wbetts, ?), jml, tlusty

west-s60.starp (130.199.60.174)
  Location: west side rack (EEMC stuff)
  NPS: westracks-nps.trg (172.16.128.227), outlet 1
  Model: APC AP7900 [3]
  User accounts [2]: apc (wbetts), device (wbetts), jml, tlusty

nplat-s60.starp (130.199.60.62)
  Location: NP, 1st floor
  NPS: north-nps1.starp [4] (130.199.60.71), outlet 1
  Model: APC AP7900B
  User accounts [2]: staradmin (wbetts), apc (wbetts), jml, tlusty

east-trg-sw.trg (172.16.128.223)
  Location: east side rack under stairs
  NPS: pxl-nps.starp (130.199.61.2), outlet 8
  Model: APC AP7901
  User accounts [2]: STARpwradm (wbetts), device (wbetts), jml, tlusty

splat-trg2.trg (172.16.128.224)
  Location: SP 1C4
  NPS: netpower1.starp (130.199.60.252), outlet 1
  Model: APC AP7900B
  User accounts [2]: staradmin (wbetts, ?), trgexpert (wbetts, ?), device (wbetts), jml, tlusty

switch1.trg (172.16.128.201)
  Location: SP 1C4
  NPS: netpower1.starp (130.199.60.252), outlet 2
  Model: APC AP7900B
  User accounts [2]: staradmin (wbetts, ?), trgexpert (wbetts, ?), device (wbetts), jml, tlusty

switch2.trg (172.16.128.202)
  Location: SP 1C4
  NPS: eemc-pwrs1.starp (130.199.60.23), outlet 4
  Model: APC AP7901
  User accounts [2]: apc (wbetts), device (wbetts), eemc (Will Jacobs and the shift crew?), oleg (Oleg Eyser, outlet 8 only), jml, tlusty

switchplat.scaler (10.0.1.150)
  Location: SP 1C4
  NPS: netpower2.starp (130.199.60.253), outlet A2
  Model: WTI NPS-8
  User accounts [2]: staradmin (wbetts (pw or old SSH key)), akio, crawford, cperkins, jml, tlusty

switchplat2.scaler (10.0.1.149)
  Location: SP 1C4
  NPS: netpower2.starp (130.199.60.253), outlet A3
  Model: WTI NPS-8
  User accounts [2]: staradmin (wbetts (pw or old SSH key)), akio, crawford, cperkins, jml, tlusty

switchplat3.scaler (10.0.1.154)
  Location: SP 1C4
  NPS: netpower1.starp (130.199.60.252), outlet 4
  Model: APC AP7900B
  User accounts [2]: staradmin (wbetts), trgexpert (wbetts, ?), device (wbetts), jml, tlusty


1 Establishing an SSH connection to some of these NPS units can take about 20 seconds.

2 In most cases, additional individual user accounts are possible (and advisable!)

3 only older weak encryption is available, use 'ssh -c 3des-cbc' to connect with an older cipher that is used by these NPS units

4 The North Platform NPS uses copper-to-fiber media convertors for its network connections.
Though the media convertors themselves are relatively unlikely to fail, it is possible to power cycle one of them on netpower2.starp, plug A4.
If one is unable to connect to north-nps1 to powercycle nplat-s60.starp, then one could try powercycling this media convertor as a last resort short of entry to the WAH for troubleshooting.


Additional Notes:

".starp" is short for .starp.bnl.gov (130.199.60.0/23)
".trg" is short for .trg.bnl.local (172.16.0.0/16)
"scaler" is short for .scaler.bnl.local (10.0.1.0/24)

In order to access an NPS or test if a given network switch is online (with ping for instance), one must first get to a system that has access to the same subnet as the NPS or switch in question. 
Most machines using a 130.199.60.0/23 address (aka "starp") will not have access to .trg or .scaler (and vice versa).
The trgscratch machine has network interfaces on all three networks, so is particularly useful in this regard.
And a final note - DNS resolution is not 100% shared across the three networks.  In particular, the scaler network has its own DNS servers which are not configured on all multi-homed hosts. The point being that using the numeric IP address may be necessary instead of the FQDN in some cases.
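
Since reachability of a given NPS depends on which of the three subnets it sits on, a quick sanity check is to classify its address before deciding which host to work from. The sketch below (a hypothetical helper, not part of any STAR tooling) encodes the three networks listed above using Python's ipaddress module:

```python
import ipaddress

# The three WAH networks, from the shorthand notes above.
NETWORKS = {
    ".starp": ipaddress.ip_network("130.199.60.0/23"),
    ".trg": ipaddress.ip_network("172.16.0.0/16"),
    ".scaler": ipaddress.ip_network("10.0.1.0/24"),
}

def which_network(addr):
    """Return the shorthand of the network an IP address belongs to, or None."""
    ip = ipaddress.ip_address(addr)
    for name, net in NETWORKS.items():
        if ip in net:
            return name
    return None

# A few addresses from the NPS table:
print(which_network("130.199.61.2"))    # pxl-nps.starp  -> ".starp"
print(which_network("172.16.128.226"))  # eastracks-nps.trg -> ".trg"
print(which_network("10.0.1.150"))      # switchplat.scaler -> ".scaler"
```

Note that pxl-nps.starp (130.199.61.2) still falls inside the /23 starp range even though its third octet differs, which is exactly the kind of case where checking against the netmask beats eyeballing the address.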

Restricted Access

Any path on the STAR Web sites (online or offline) containing the word "restricted" triggers protection very much like the protected areas: an account and password will be requested. This access was created to allow non-blinded (CME analysis) members of the collaboration to access restricted data. As a result, DO NOT share this access information with ANYONE, and do not make assumptions about who is and is not blinded. Your colleagues should be able to access this page and this comment if they are non-blinded/allowed.

The information is below as a comment.

Account: restricted
Password: [see below]

The Drupal group "Restricted Access" is meant to provide a simpler access protection for restricted information. Simply associate documents and pages with this group and ONLY this group (the group is private; only members can see the documents associated with this group).

STAR Images

Here are some images from the STAR detector and collaboration.

 

Pixel related images

End view of pixel detector mockup

 

Participants

 

Photos of visit to LBNL metrology lab

 

Zeiss machine 1

 

Zeiss machine 2

probe stands for Zeiss machine 

SketchUp STAR

SketchUp STAR

3D models of the STAR detector, the RHIC accelerator complex, a prototype of an event display for the STAR experiment in SketchUp

 

Tai Sakuma

 

I drew the STAR detector and the RHIC accelerator complex in SketchUp. I also built a prototype of an event display for the STAR experiment using the SketchUp Ruby API. This page shows image files generated in this project.

 

The STAR detector

 

A cross sectional view of the STAR detector. This figure was used in Phys. Rev. D 86, 032006 (2012) (DOI: 10.1103/PhysRevD.86.032006 )

 
 

The same 3D model as in the previous figure, but in perspective. This figure has often been used in presentations at scientific meetings.

 

RHIC

 

The RHIC accelerator complex

 

STAR detector subsystems


The Beam-Beam Counters (BBC)

 

The Barrel Electromagnetic Calorimeter (BEMC)

 

A BEMC Module

 

The geometry of the calorimeter towers in a BEMC module

 

The Time Projection Chamber (TPC)

 

The solenoidal magnet subsystem

 

Jet η and detector η


Jet η and detector η

 

Jets at three levels

The following three figures illustrate jets defined at the three different levels. These figures have been used several times in scientific meetings, e.g., slide 4 of a presentation at APS 2010.


A high-pT back-to-back dijet event at the detector level. The solid trajectories indicate TPC track measurements while the lego blocks indicate energy deposited in the BEMC towers.

 

A high-pT back-to-back dijet event at the hadron level

 
 

A high-pT back-to-back dijet event at the parton level

 

Jet Patch Trigger


The locations of the twelve jet patches of the BJP1 trigger in Run 6. The size of a jet patch is 1.0x1.0 in the η-φ coordinate system. Each jet patch contains 400 BEMC towers. This figure was used in Phys. Rev. D 86, 032006 (2012) (DOI: 10.1103/PhysRevD.86.032006 )

 

Event Display

I developed a prototype of an event display for the STAR experiment using the SketchUp Ruby API. This prototype was used in jet analyses, in particular, in establishing a jet definition and determining background events.


Two snapshots showing a prototype of an event display for the STAR experiment

 

STAR Management Document

This tree contains STAR Management related documents.

Information for International Visitors

 Information for International Visitors


On this page you will find guidelines and pointers that may help in acquiring the proper documents for a visit to BNL. Please be aware that current lead times can be long to very long, so plan your trips, whether for collaboration meetings or for taking shift, accordingly.

Disclaimer: While an effort will be made to keep this information up to date, it is provided as-is. The responsibility to start the process in a timely manner and to have all required documents available rests entirely with the applicant. Advice from professional legal counsel should always supersede any suggestions made here.


Official information from the GUV Center:
More information from other sources:
What to expect when planning to visit BNL for a STAR meeting or taking shifts:
  • stay tuned ... we will update this with more specific information & feedback from other STAR international visitors.

Management Team 2014--2017

 

BES Milestones

 

 
 
Dear RHIC Spokespersons:
 
I am writing again to update my request for the beam use requests for the coming RHIC runs.
 
In consultation with the Office of Nuclear Physics we have decided to make a change to the future RHIC run schedule. Specifically, we now plan to run RHIC in both FY16 and FY17, followed by one year (FY18) without a RHIC run during which the low energy RHIC electron cooling (LEReC) system will be installed. The high statistics Beam Energy Scan II is then planned for the years FY19 and FY20. The modified plan will allow for a less aggressive schedule of the LEReC project. It will also relax the conflict between the heavy ion and spin physics programs of RHIC that remained unresolved at last year’s PAC meeting.
 
I request that you submit the annual beam use requests by May 19, 2015. The BURs should be for a 22-week RHIC run in FY16, and either a 15-week or a 22-week RHIC run in FY17.
 
I also ask the STAR Collaboration to present the iTPC proposal to the PAC and to provide an update on experimental efforts aimed at exploring possible phenomenological manifestations of the chiral magnetic effect.
 
Both collaborations should present a tentative schedule for the release of results from the data taken in runs 13 and 14 (p+p, Au+Au 15, Au+Au 200, 3He+Au).
 
Thanks, Berndt

From: Mueller, Berndt 
Sent: Thursday, October 23, 2014 11:53 AM
To: Xu, Zhangbu; Morrison, David
Cc: James Nagle; Dunlop, James C; Karsch, Frithjof
Subject: performance milestones
 
Dear RHIC Collaboration spokespersons & Frithjof:
 
I received a "heads up" from Jim Sowinski that a main scientific performance milestone for nuclear physics in FY15 will be the completion of the analysis of the 15 GeV Au+Au data from Run 14 and the connection of the BES data with lattice QCD calculations. Here is the statement in more detail (Jim's words in blue):
 
“FY15:  Measure bulk properties, particle spectra, correlations and fluctuations in gold + gold collisions at Relativistic Heavy Ion Collider (RHIC) to search for evidence of a critical point in the Quantum Chromodynamics (QCD) matter phase diagram.”
 
We recognize that significant progress has been made in finalizing and collecting the results from Beam Energy Scan I to make the case for BES II.   In evaluating this goal we will primarily be looking at producing results from the 15 GeV run carried out last year.   Other efforts that might contribute towards this measure could be results from the fixed target data taken last year and theoretical efforts such as progress towards better connecting measurements to QGP properties or extending LQCD calculations toward finite chemical potential.
 
It will be important to have tangible results to point to by the end of September 2015. To me this looks eminently doable and desirable in view of the Quark Matter 2015 conference in Kobe.
 
Let me know if you have any questions.
 

Organization

Organization Information

The following information is available about the STAR organization. Click on any of the following topics.

Policies

 

Statement of Support for Continued STAR Running post BES-II

Official letter from BNL Lab. management, Nov 2017


STAR Beamtime Requests

STAR Council information


Archived Collaboration Lists
The STAR Collaboration list of members
MEMO on Collaboration Institutional bylines (10/22/2015)

STAR Shift Plan
The STAR Collaboration shift policy (posted 11 July 2001)

 

2001 original bylaws

Revised Sections II and III 28 November 2001

 

I. Collaboration Membership

  1. Members

    Scientists, graduate students, engineers and other technically-oriented persons intending to make a significant contribution to this collaboration over a period of at least one year are eligible to be members. Members of other RHIC collaborations will not be members of this collaboration without permission from the Council.

     

  2. Institutions

    Institutions (universities or laboratories) are admitted to the Collaboration by the Council upon recommendation of the Spokesperson. Normally an institution will have at least two Ph.D.s who are eligible to be members. Each institution is responsible for its list of members and for designating a contact person who will receive and disseminate collaboration information within that institution.

     

II. Council

  1. Membership

    The Council will consist of one member from each collaborating institution who will represent the membership from that institution. Institutions are admitted to the Collaboration by the Council upon recommendation of the Spokesperson and upon receiving an affirmative vote of at least 75% of the Council. Each institution will select its representative. Institutions may also be admitted to the Collaboration without Council membership. Such an institution should be affiliated with another institution which has Council membership and may be given its own Council membership at a later time by a vote of the Council. The Spokesperson and Deputy Spokespersons will be ex-officio members of the Council. The Council may add extra members.

    Younger members of the collaboration will be represented by up to three at-large Council members with no more than five years experience beyond the Ph.D., at least one of whom should be a graduate student. Candidates for at-large positions can be either self-nominated or nominated by others after a call for nominations by the Spokesperson. The at-large representatives will be elected by those members of the collaboration with no more than five years experience beyond the Ph.D. Elections should take place at least every two years.

     

  2. Function

    The Council will deal with general issues which concern the collaboration. Examples include the organization and governance of the Collaboration, adoption of the bylaws and amendments thereto, the policy on admission of new members and institutions to the Collaboration, and publication policy. In addition, the Council is responsible for election of the Spokesperson and for approval of Deputy Spokesperson appointments.

     

  3. Governance and Meetings

    The chairperson of the Council is elected by majority vote of the Council from among its ranks. The term of office is two years, and is not renewable. At the end of the Chairperson's first year in office, a deputy is elected who will become the chairperson's successor. After the end of the chairperson's term of office, the chairperson will serve yet another year as deputy.

    Normally, Council meetings will be held during collaboration meetings. At the discretion of the Council Chair, additional meetings can be called with at least two weeks notice. Any Council member unable to attend a given meeting can name a substitute representing the same institution or group. All Council meetings will include a closed session, without the Spokesperson and Deputy Spokesperson(s) present. Elections by the Council, of the Spokesperson or Council Chair or Deputy Chair, must be announced publicly to the entire collaboration at least two weeks in advance of the meeting at which the elections will take place.

     

  4. Voting

    The Council will make decisions on all issues except elections and admission of new institutions based on a simple majority of all voting members. Any member may call for a secret ballot. Votes on adding extra members, admitting new institutions, or amending these bylaws must be announced ahead of time. Proxies to the Council chairperson are allowed for non-election votes, as are electronic meetings and ballots.

    Voting for Spokesperson will be carried out by secret ballot, and without proxies, at publicly announced Council meetings. A quorum of 60% of Council members must be represented at the meeting for a valid vote. A candidate for Spokesperson can be elected in the first round of voting only if he or she receives more than 50% of all votes cast, including abstentions. If no candidate obtains this absolute majority, runoff rounds will be conducted for the top vote-getters until one candidate obtains more than 50% of the votes cast, not counting abstentions. The re-election of a sitting Spokesperson requires a 2/3 majority of the voting representatives present at the meeting.
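
As a hypothetical illustration only (not an official Collaboration tool), the arithmetic of the voting thresholds above can be sketched as:

```python
def first_round_elected(votes_for, total_votes_cast):
    """First round: a candidate wins only with more than 50% of ALL
    votes cast, including abstentions."""
    return 2 * votes_for > total_votes_cast

def runoff_elected(votes_for, votes_cast_excl_abstentions):
    """Runoff rounds: more than 50% of votes cast, NOT counting abstentions."""
    return 2 * votes_for > votes_cast_excl_abstentions

def quorum_met(members_represented, council_size):
    """A valid Spokesperson vote requires 60% of Council members represented."""
    return members_represented >= 0.6 * council_size

def sitting_spokesperson_reelected(votes_for, voting_reps_present):
    """Re-election of a sitting Spokesperson requires a 2/3 majority of the
    voting representatives present (taken here as at-least-2/3)."""
    return 3 * votes_for >= 2 * voting_reps_present
```

For example, with 20 votes cast including abstentions, a first-round winner needs at least 11 votes; on a 20-member Council, 12 members represented meet the quorum.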

    Candidates for Spokesperson can be nominated by any member of the collaboration. An ad hoc nomination committee consisting of the Council Chair and two Council members appointed by the Chair will solicit nominations at least 30 days before the election, and will prepare and oversee the ballots.

     

III. Spokesperson

  1. Selection

    The Spokesperson is elected by the Council and serves at its discretion. The normal term of office is 3 years, renewable with a 2/3 majority of the voting Council representatives present at the meeting.

    The Council can decide to terminate the Spokesperson's term in office at any time and to request new elections. Such a termination requires a 3/4 vote of the representatives present at a Council meeting with a quorum of at least 2/3 of the members represented.

     

  2. Function of the Spokesperson

    The Spokesperson will be the representative of the Collaboration in scientific, technical, and managerial concerns. The Spokesperson will be concerned not only with the design and construction of the detector and its upgrades, but also with all other preparations required to operate the experiment efficiently and to publish results in a timely and responsible fashion, and with continued leadership of the Collaboration.

     

  3. Deputy Spokespersons

    There will be up to two deputy spokespersons. The spokesperson and the deputies must all belong to different institutions. The deputy spokespersons will be appointed by the Spokesperson subject to the approval of the Council. The deputy spokespersons provide support and assistance to the Spokesperson in the management of the experiment and collaboration. They may be appointed by the Spokesperson to act on his or her behalf and to represent the Collaboration.

     

IV. Adoption

These bylaws were adopted by the Council on 21 June 1991 and amended on 16 Nov. 1991, 21 April 1993, 7 Nov. 1995, and 28 Nov. 2001.

(Adapted from the SDC)

Publication Policies

Quick reference

 

Members

Affiliation                            Representative           E-mail                              Phone #
University of California, Los Angeles  Huan Huang               huang@physics.ucla.edu              310-825-9297
Valparaiso University                  Don Koetke, Chairperson  donald.koetke@valpo.edu             219-464-5377
Yale University                        Jack Sandweiss           sandweiss@hepmail.physics.yale.edu  203-432-3358
Brookhaven National Laboratory         Jamie Dunlop             dunlop@bnl.gov                      631-344-7781

 

Charter

August 14, 1994

Goals. The publication goals of the STAR Collaboration are as follows:

  1. The speedy publication of results that the Collaboration has agreed are sound and ready for publication. Here "publication" means the announcement of results through seminars, conferences, and/or scientific journals.
  2. The avoidance of rumors and premature publication. 
  3. The equitable assignment of credit to individuals for their work.

     

Policies.

The following policies and practices have been adopted to further the above listed goals:

 

  1. Data from all parts of the STAR detector shall be available to all members of the STAR Collaboration for analysis. Furthermore, it is the responsibility of all collaborators to see that the necessary correction algorithms and correction parameters be updated and made available to the entire Collaboration in a timely manner. Any member of the Collaboration is free to analyze any part of the data.
  2. Given the overriding principle of open communication within the collaboration, collaborators should, upon request of a colleague, provide relevant code and input assumptions that would permit results to be checked and confirmed.
     
  3. Members of the Collaboration should exercise caution and good judgment when discussing experimental results with individuals outside the Collaboration before the results have been published (i.e., presented publicly in a seminar or conference or submitted for publication in a scientific journal). These results should clearly be labeled "Preliminary". When individuals outside the Collaboration are consulted for advice on the analysis or interpretation of the data, those individuals should be asked to respect the confidentiality of the data.
  4. Members of the Collaboration should not present unpublished experimental results in seminars or conferences until (a) a memo has been circulated to the Collaboration giving details of the analysis and results, and (b) a rehearsal has been held in front of several members of the Collaboration, including a Council member and, if possible, the Spokesperson. In particular circumstances the Spokesperson may waive both these requirements.

    The Spokesperson will be designated as the Talks Coordinator. He will strive to distribute speaking invitations received by the Collaboration on the basis of the suitability of the speaker to the topic and an equitable distribution of talks to individuals and institutions with appropriate consideration of regional distinctions. As an aid to this end, he will maintain a record of all talks; thus members of the Collaboration who receive personal invitations to give talks should inform the Talks Coordinator. Speakers are encouraged to deposit a copy of their transparencies in a file which will be maintained by one of the secretaries.

  5. A paper should not be submitted to a scientific journal until the above requirements have been met and (c) a not-far-from-final draft of the paper has been discussed at a general meeting, and (d) the final draft has been circulated. After the discussion of the not-far-from-final draft, an ad hoc committee may be appointed by the Spokesperson to facilitate the preparation and submission of the final draft. This committee would consist of the principal authors and one or more individuals not directly working on the analysis. It would be empowered to adjudicate disagreements on details of the paper. In addition, the Spokesperson will maintain the list of publications.
  6. The Spokesperson will maintain the Collaboration's current author list. In general, the authors of papers will be those collaborators who have contributed to the taking or analysis of the data reported in the paper. In recognition of their contributions, collaborators who have worked on the installation of new equipment but have left the Collaboration prior to the taking of data will be included on the initial papers of the Collaboration using that equipment. Normally individuals are dropped from the general author list one year after they leave the Collaboration; however, individuals who have made special contributions to a given topic, either in hardware or analysis, should be retained on the author list of papers for which their contributions are relevant. Technical papers need only list as authors those individuals who contributed to that project, but the "STAR Collaboration" should be acknowledged. Decisions on the inclusion of authors will be done on the basis of these guidelines by the Council member from the author's institution with the concurrence of the Spokesperson.

    The authors of papers will be listed in alphabetical order, preceded or followed by the phrase "STAR Collaboration". Papers which result from students' theses should be so indicated by appropriate footnotes. Papers for conference proceedings are normally submitted in the speaker's name, plus other major contributors if appropriate, plus "STAR Collaboration", and require only the Spokesperson's agreement on the wording of the text.

  7. The Spokesperson will maintain a list of students' thesis topics which will be updated at collaboration meetings. The Council may aid in resolving any conflicts.
  8. Members of the Collaboration should not issue press releases or call press conferences without the approval of the Spokesperson, who will consult with the members of the Council.

 

Physics Opportunities with STAR in 2020+

 

STAR 2016 BUR for run 17

PAC recommendation 2016

Weekly phone meeting: 
Monday 3-4PM (EDT) 
https://bluejeans.com/333205157

April 4, meeting agenda: 
Drupal Calendar

latest version of current BUR:
Version 12, 05/21/2016

Last-year's BUR and CAD projections

Lambda/Lambdabar polarization at 27 GeV

Subsequent meeting with Berndt after the NSAC meeting on March 23 about Run 17: Berndt said that, after discussion with DOE and BNL management, it is likely that 19 cryo-weeks of RHIC operation should be taken as the baseline, including two weeks of CeC commissioning. That means effectively 17 cryo-weeks for Run 17.

Executive Summary: 

The STAR Collaboration makes the following two-year beam-use proposal, in order to achieve its spin and relativistic heavy ion physics goals on a timescale consistent with intense international interest and competition in these areas, as well as to utilize RHIC beams effectively, taking full advantage of recent improvements in machine and detector capability.

 

 

Run  Energy            Duration  System          Goals                          Priority  Sequence
17   √sNN = 510 GeV    13 wk     transverse p+p  AN of W±, γ, Drell-Yan;        1         1
                                                 L = 360 pb-1, 55% pol.
17   √sNN = 510 GeV    1 wk      p+p             RHICf                          -         -
17   √sNN = 510 GeV    2 wk      CeC             commissioning                  -         2
18   √sNN = 200 GeV    3.5 wk    Ru+Ru           1.2 billion minimum bias       2         4
18   √sNN = 200 GeV    3.5 wk    Zr+Zr           1.2 billion minimum bias       2         3
18   √sNN = 27 GeV     2 wk      Au+Au           500M minimum bias              3         5

Table 1: Scenario I: 19 cryo-weeks for Run 17 and 13 cryo-weeks for Run 18. We assume most of the TPC bandwidth (1800 Hz) goes to the minimum-bias event rate for isobar data-taking, with 90 hours of DAQ time per week. All luminosities requested are actual sampled luminosities. See text for details of the HFT and pp510 run conditions and optimizations.

 

STAR’s highest scientific priority is the first significant measurement of the sign change of the Sivers’ function, when compared to the value measured in SIDIS, and evolution effects in transverse momentum distributions through measurements of single spin asymmetries in W+/-, Z, direct photon and Drell-Yan production in transversely polarized √s = 500 GeV p+p collisions. The sign change measurement is a fundamental test of QCD and is being pursued by other experiments, making a timely measurement imperative.

 

STAR’s second scientific priority is to clarify the interpretation of the observed signatures of the chiral magnetic effect, chiral magnetic wave and chiral vortical effect by making measurements that disentangle the signal from background v2 effects. We request a 3.5-week run each for Ruthenium-96 (Ru+Ru) and Zirconium-96 (Zr+Zr) collisions in Run 18. This choice of nuclei is ideal: it varies the magnetic field by up to ~10% while keeping nearly all other parameters the same. This decisive measurement of the ratio of charge separation in the isobars reduces the flow background dramatically, and will greatly advance our understanding of the chiral magnetic effect, which has fundamental impact beyond the field of high-temperature QCD.
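
The ~10% field variation follows from simple charge counting, since the two isobars share the same mass number. A back-of-envelope sketch (illustrative only, not part of the BUR):

```python
# Ru-96 and Zr-96 have the same mass number A = 96, so the initial
# magnetic field in the collision scales, to first order, with the
# proton number Z of the colliding nuclei.
Z_RU, Z_ZR = 44, 40            # Z of ruthenium-96 and zirconium-96

field_ratio = Z_RU / Z_ZR      # B(Ru+Ru) / B(Zr+Zr) ~ 1.10, i.e. ~10%

# A CME-driven charge separation is expected to scale roughly as B^2,
# so the signal should differ by ~20% between the two isobar systems,
# while flow-driven backgrounds stay nearly identical.
signal_ratio = field_ratio ** 2
```

This is the sense in which the isobar comparison is "decisive": the background is held fixed while the expected signal changes by a calculable amount.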

 

 

Run  Energy            Duration  System          Goals                          Priority  Sequence
17   √sNN = 510 GeV    13 wk     transverse p+p  AN of W±, γ, Drell-Yan;        1         1
                                                 L = 360 pb-1, 55% pol.
17   √sNN = 510 GeV    1 wk      p+p             RHICf                          4         -
17   √sNN = 510 GeV    2 wk      CeC             commissioning                  -         2
17   √sNN = 62.4 GeV   4 wk      Au+Au           Jets, dileptons, NPE;          5         3
                                                 1.5B minimum bias
18   √sNN = 200 GeV    3.5 wk    Ru+Ru           1.2 billion minimum bias       2         5
18   √sNN = 200 GeV    3.5 wk    Zr+Zr           1.2 billion minimum bias       2         4
18   √sNN = 27 GeV     2 wk      Au+Au           500M minimum bias              5         6

Table 2: Scenario II: 24 cryo-weeks for Run 17 and 13 cryo-weeks for Run 18.

 

 

Our next scientific priority is to take data at beam energies that are lower than the nominal energies but are not part of STAR’s proposed Beam Energy Scan Phase II program. Our request includes two programs in this category:

a)     Au+Au collisions at 62 GeV for measurements of inclusive jets and the charm spectrum at low energy. With newly developed analysis techniques, inclusive jet measurements are possible with minimum-bias data in Au+Au collisions. We have recently extended the measurement to Au+Au at 62 GeV with limited statistics from Run 10. We request 4 weeks of Au+Au collisions at this energy for an inclusive jet RCP measurement, which will also provide a measurement of non-photonic electrons from charm semileptonic decays. In addition, with the significant reduction of material from Run 16 to Runs 17 and 18, dilepton measurements with good statistics are possible in 4 weeks of data-taking.

b)     STAR has observed an exciting new effect: the global polarization of hyperons in non-central Au+Au collisions. The observation has generated considerable theoretical interest, and BES-II (2019-2020) will provide opportunities for higher-precision exploration of the effect. There is also the possibility of observing a difference between the Lambda and anti-Lambda polarizations if the global polarization effect is connected to the chiral magnetic effect. We propose to extract important information on this effect at a beam energy above the nominal BES-II energies through a high-statistics dataset at 27 GeV. Collecting this dataset in Run 17, if sufficient beam time is available, would allow us to analyze the effect with high statistics and guide our future direction in this subject ahead of the BES-II program.

 

The STAR Collaboration presents in this BUR five compelling, prioritized scientific programs for the 2017 and 2018 RHIC runs, prior to the start of BES-II [i]. We furthermore discuss highlights from the scientific publications, on-going analyses and detector performance from recent runs, and outline the planned upgrades in the next few years in preparation for Run 17 and the Beam Energy Scan II.


[i] STAR Beam Energy Scan II: Studying the Phase Diagram of QCD Matter at RHIC

   https://drupal.star.bnl.gov/STAR/starnotes/public/sn0598

Highlights from recent results

 
1) Spin publications and analyses from run15 

2) HFT preliminary results and other open charm results (Xin Dong and Zhenyu et al.) 

3) MTD preliminary and other quarkonium results (Lijuan et al.)

4) chiral magnetic effect (Paul and Gang) 

5) Lambda Global Polarization (Mike) 

Anything else? 

HFT preliminary results and projection

2016/05/10 v3: text updated 
2016/05/07 v2: edited section 2 "Run16 Performance and Projection" .
2016/05/04 v1: edited section 1.1 "Progress on physics analyses with the HFT", put section 2 "Run16 Performance and Projection" as a space holder for now. 
 

MTD section 2.5: Progress on results and status with the MTD

on May 24, add one para in Section 4.8 for MTD in run17.

Update on May 16th:
separate run 16 performance and projection from section 2.5.

on May 6.
1)  include recent results and status in the current write up.
2) for another section 3.1 under Run 16 performance report: The HFT and MTD data performance and projections,
there is not much to say for the MTD. Information in terms of sampled luminosity is included in Section 2.5.

PAC talks

 PAC talks: 

1) run14/15 experimental results and status (Frank) 

2) BUR (Zhangbu)

3) BES-II and iTPC upgrade (Helen+Flemming)

It will be good if we can have a preview of the RHICf (Sako, Itaru), Spin (Carl) and CME (Paul) talks.

computing and calibration

 Producing the requested data for physics will involve significant use of data stores, considerable processing time, and time spent understanding and completing calibrations. When a dataset continues a collision species and energy from a previous year (with STAR's detector similarly configured), first-physics calibrations for the highest-priority dataset typically require on the order of two months after the conclusion of that year's data-taking; each subsequent dataset calibration for that year needs another month. The proposed 500 GeV p+p and 27 GeV (and potentially 62 GeV) Au+Au datasets will be such continuations, adding to what was acquired in 2013 and 2011 (2010) respectively (repeating the environment of no HFT and no iTPC). Understanding the features of new running conditions could extend calibrations of the isobar datasets by a few additional months, and it is important to keep in mind that unforeseen peculiarities of any given dataset can further delay delivery.


Table X presents estimates of the DAQ and data summary ("MuDst") dataset sizes for the proposed colliding species, along with projected single-pass production times on 100% of STAR's 2016 allocation of the RACF computing farm. It is critical to emphasize that these numbers are tied to the proposed event goals, and would scale with the actual events acquired. These productions will need to balance computing resource usage against prior-year datasets as well as ongoing calibrations and run support. STAR may choose, as an imaginable example, to produce the Run 16 200 GeV Au+Au data concurrently with the Run 17 500 GeV p+p data in a 60%-40% apportionment, which would extend the latter production to a year or more.

For the 500 GeV pp dataset, we expect 360 pb^-1 to require approximately 3.3 billion events recorded for processing. 

Data set       Events  DAQ size [PB]  MuDst size [PB]  Production time [months]
500 GeV p+p    3.3B    3.20           1.75             5.0
62 GeV Au+Au   1.5B    0.81           0.54             1.0
27 GeV Au+Au   0.5B    0.24           0.12             0.5
200 GeV Ru+Ru  1.2B    0.88           0.65             1.0
200 GeV Zr+Zr  1.2B    0.88           0.65             1.0
Totals                 6.01           3.71

Table X: Computing resource estimates for production of the proposed Run 17 and 18 datasets (see text for details)
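
Since the Table X estimates are tied to the proposed event goals and scale with the events actually acquired, the rescaling can be sketched as follows (an illustrative sketch only; the per-event rates are read off the table, and the linear-scaling assumption is ours):

```python
# Table X figures, keyed by dataset:
# (event goal [1e9 events], DAQ size [PB], MuDst size [PB], production [months])
TABLE_X = {
    "500 GeV p+p":   (3.3, 3.20, 1.75, 5.0),
    "62 GeV Au+Au":  (1.5, 0.81, 0.54, 1.0),
    "27 GeV Au+Au":  (0.5, 0.24, 0.12, 0.5),
    "200 GeV Ru+Ru": (1.2, 0.88, 0.65, 1.0),
    "200 GeV Zr+Zr": (1.2, 0.88, 0.65, 1.0),
}

def rescale(dataset, actual_events_1e9):
    """Scale the proposed-goal estimates linearly to the events actually taken."""
    goal, daq_pb, mudst_pb, months = TABLE_X[dataset]
    f = actual_events_1e9 / goal
    return daq_pb * f, mudst_pb * f, months * f

# e.g. if only 1.0B Ru+Ru events were recorded instead of the 1.2B goal:
daq, mudst, months = rescale("200 GeV Ru+Ru", 1.0)
```

Note that this linear picture ignores the resource-sharing with prior-year productions discussed above, which stretches wall-clock time without changing the CPU-months needed.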

 

run 17 pp510 BUR

 
similar to last BUR with updates from Spin Plan and also detector 

section structure same as last BUR

run17 (24 cryo-weeks) Au+Au 62 GeV

 
proposal and Projection for 62GeV (Helen et al.)

run18 (maybe 17 or later) 27GeV for Lambda/Lambdabar Global Polarization

 proposal and projection for 2 weeks of 27GeV Au+Au 

run18 isobar (3+3 weeks)

 
proposal and projections for the isobars (Zr+Zr and Ru+Ru)
3+3 weeks

Discuss possible Isobar BES? 

The isobar (CME search) part of the BUR has been prepared by Gang, Paul and Sergei:
http://www.star.bnl.gov/protected/bulkcorr/rexwg/service/BUR1718/CME.pdf

The tex and eps files are in the same directory,
while the pdf file is also attached below.

STAR run 15-16 BUR

Final PAC Report in 2014:

https://drupal.star.bnl.gov/STAR/system/files/PAC_report_2014.pdf (07/07/2014)

new RHIC luminosity projection: 
https://drupal.star.bnl.gov/STAR/system/files/2014-0921%20RHIC%20collider%20projections.pdf 

Draft version:

Final version:
STAR_BUR_Run1516_v18.pdf(06/02/2014) 

draft talk:
STAR_BUR_PAC_2014_xzb_final.pptx
(06/09/2014) 
STAR_BUR_PAC_2014_xzb_v0.pptx
(06/06/2014) 

BES II draft talk at PAC2014
 

STAR_BUR_Run1516.v9.pdf (05/28/2014)
 

STAR_BUR_Run1516.v8.pdf (05/27/2014)
 

STAR_BUR_Run1516.v6.docx(05/25/2014) 

STAR_BUR_Run1516.v1.docx (04/23/2014)

<<<<<<<<<<<

The next Nuclear and Particle Physics Program Advisory Committee meeting will be held at Brookhaven 11-13 June 2014. 
The deadline for the submission of written material to be considered at this meeting is 3 June 2014.
PAC Meeting agenda, Jun 11--13, 2014

previous BUR and presentations at last PAC meeting

PAC meeting agenda, 2013
 

 

Committee meeting:
SeeVogh ID: 100 4272 
Every Thursday effective 04/03/2014 from 3-4PM

Title:                     run15-16BUR
Description:       
Community:        STAR
 Meeting type: Open Meeting (Round Table)
 
 Meeting Access Information:
 SeeVoghRN Application       http://research.seevogh.com/joinSRN?meeting=MtM8Ma2B2eDlDa9l9IDs9t
 WebStart (Java)             http://research.seevogh.com/join?meeting=MtM8Ma2B2eDlDa9l9IDs9t
 Mobile App : Meeting ID: 100 4274  
or Link: http://research.seevogh.com/join?meeting=MtM8Ma2B2eDlDa9l9IDs9t
 
 - Phone Bridge
                ID: 100 4274

Template for the word document style:
http://www.star.bnl.gov/~eca/BUR1516/STAR_BUR_Run1516.v0.docx 

<<<<<<<<<<<<<<<<

Charge to the committee:

The committee is requested to provide a document on the Beam Use Request for RHIC Runs 15 and 16 with the planned guideline of 22 cryo-weeks for both runs. In the last BUR, STAR presented a request for Run 15 with 5 weeks of p+Au and 12 weeks of p+p physics running, and a multi-year three-stage run plan for the heavy-flavor program. Progress in scientific results, detector developments, and installations relevant to the programs in the BUR should be documented. An initial draft of the BUR should be available by May 10 for collaboration review, and a final version should be submitted to the PAC by June 3.

The committee consists of: Elke Aschenauer, Ernst Sichtermann, Huan Huang, Helen Caines, Dan Cebra,

Jerome Lauret, Flemming Videbaek, Xin Dong, Daniel Kikola, Lijuan Ruan, Frank Geurts,

Zhangbu Xu (chair)

-----Original Message-----

charge from ALD

From: Mueller, Berndt

Sent: Monday, March 10, 2014 6:39 PM

To: James Nagle; Morrison, David; Xu, Zhangbu; Nu Xu

Cc: Yamin, Peter; Lissauer, David; Dunlop, James C; Roser, Thomas

Subject: 2014 RHIC Beam Use Requests

Dear RHIC Collaborations:

In view of the proximity of Quark Matter 2014 to the PAC meeting on June 11-13, the final version of the annual beam use proposals from the RHIC collaborations will be due on June 3, 2014. In view of the closeness of this date to the PAC meeting, and to leave the PAC members sufficient time to study the beam use requests, I urge you to not exceed this deadline. In order to permit Laboratory feedback before the proposals are finalized, I request that you send me a draft version of the beam use proposal no later than May 12, before everyone leaves for Quark Matter.

The proposals should describe and justify which beam operations you would like to see during the 2015 and 2016 runs, which are currently planned as 22 week runs. As usual, the beam use proposal should also give a brief review of recent published results and an early assessment of the success of Run-14.

Please send the proposals in electronic form to Peter Yamin, with copies to me, David Lissauer, Jamie Dunlop, and Thomas Roser.

Thanks in advance for your cooperation in this important matter.

Best regards

Berndt

Executive Summary

 

With two new major upgrades, the Heavy Flavor Tracker (HFT) and the Muon Telescope Detector (MTD), the STAR Collaboration is well positioned to lead the field in several major scientific programs in the next few years. We have maintained a steady pace in our scientific productivity and in training the next generation of young scientists over the last 15 years. To continue this excellent track record, we, the members of the STAR Collaboration, have recently produced a road map for our future in the form of our decadal plan, the eSTAR Letter of Intent and the Beam Energy Scan Phase II Whitepaper, and are in the process of producing a pp/pA document. This is a critical time for RHIC as a facility and for STAR as a collaboration. The Collaboration is fully committed to its priorities, based on the scientific pillars of studying QGP properties, quantifying nucleon spin structure, searching for the critical point in the QCD phase diagram, and exploring the high-density gluon field in nuclei. A sustainably strong science program in STAR calls for the realization of the proposed upgrades and substantial new initiatives, and requires continuous effort and involvement from each and every collaborator. The STAR Collaboration proposes the following two-year beam-use request for the RHIC run periods in 2015 and 2016, in order to achieve its near-term spin and relativistic heavy ion physics goals. A detailed breakdown of the proposed run periods is shown in Table 1.1.

Run 15:

 
The on-going Run 14 with Au+Au collisions at 200 GeV is producing the first dataset with the complete MTD and the PIXEL/IST subdetectors of the HFT. We have proposed a few incremental detector upgrades that are crucial for a successful Run 15 scientific program: a refurbished Forward Meson Spectrometer (FMS), a scintillator-based pre-shower detector in front of the FMS, and Roman Pot Phase II*. These upgrades are anticipated to be ready for the run. The proposed Run 15 programs with p+p and p+Au collisions at √s = 200 GeV provide crucial baseline measurements of charmed mesons and quarkonia. The same beam species will enable several important measurements of the ridge phenomenon and the onset of gluon saturation using unique polarized p+Au collisions in collider mode.
 
A six-week run of p+p collisions at √s = 200 GeV with longitudinal polarization will provide a dataset to further constrain the gluon polarization through inclusive jet and di-jet measurements at mid-rapidity, especially at large momentum fractions x; this constitutes the main spin physics objective of the Run 15 longitudinal running. The non-zero gluon contribution to the spin structure of the nucleon seen in the double-longitudinal polarization measurement of dijet production has generated significant interest in further improving these measurements and their kinematic reach. The same run configuration also allows measurements of ALL for π0 in the Forward Meson Spectrometer, reaching significantly lower x.
 
We propose a subsequent six-week run of p+p collisions at √s = 200 GeV with spins transverse to the momentum direction. These p+p collisions exhibit kinematic and dynamical effects that are directly sensitive to quark transversity and partonic motion within the proton. In addition to improving the existing IFF and Collins analyses, a refurbished FMS with the additional pre-shower in front will provide clean direct-photon measurements at forward rapidity. This program is complemented by studies of polarized p+p elastic scattering and central exclusive production, in which a far-forward proton is detected intact. The relocation of the Roman Pots allows concurrent data-taking with the nominal beam configuration, and enables new measurements of AN for exclusive J/Ψ production and inclusive diffractive production in p+p collisions.
 
A new five-week program of √sNN = 200 GeV p+Au collisions with a transversely polarized proton beam, integrating 300 nb-1 of luminosity, is proposed to follow the 12 weeks of p+p collisions. The program will address important physics, such as gluon saturation, cold nuclear matter effects on open heavy flavor and heavy quarkonium production, the ridge effect in p+A, the Cronin effect and strangeness enhancement in small systems. Utilizing RHIC's unique capability of polarizing the proton projectile beam on heavy nuclei, the ratio of single spin asymmetries in π0 and direct photon production at forward rapidity between p+A and p+p collisions provides access to the elusive nuclear Weizsäcker-Williams (WW) gluon distribution function. The asymmetry for exclusive J/ψ production in ultra-peripheral p+A collisions, measurable with the upgraded Roman Pot detectors in STAR, will explore the generalized parton distribution function E for gluons.
 
Run16:
 
We propose a 10-week run of Au+Au collisions at √sNN = 200 GeV, integrating 10 nb-1 of luminosity with rare triggers for Upsilon states, gamma-jet correlations, B→J/ψ and J/ψ production, and 2 billion minbias events for Λc and differential studies of charm flow and correlations.

A total integrated luminosity of 20 nb-1 from the combination of run14 and run16 provides the necessary statistics for a measurement of the three Upsilon states. We also request to collect 2 billion minimum-bias Au+Au collision events at √sNN = 200 GeV in Run16. The effective figure of merit in terms of signal statistics increases by about a factor of 6 for low-pT D0 in comparison to the similar dataset taken in run14. This significant improvement will allow us to perform differential studies of the charmed hadron yields, flow and correlations in several centralities. More importantly, the high statistics and the improved pointing resolution for low momentum tracks will make the Λc measurement feasible (cτ of Λc ~ 60 μm).

 
A 7-week run of transversely polarized p+p collisions at √s = 510 GeV with an integrated luminosity of 700 pb-1 is proposed for AN of W± at mid-rapidity, and for direct photon and exploratory Drell-Yan measurements at forward rapidity. The possibility of measuring AN for Drell-Yan, W±, Z0 bosons and direct photons in one experiment would provide a unique world-class capability to test TMD evolution, access the Sivers function for sea quarks, and test the predicted non-universality of the Sivers function through three different processes, distinguishing the underlying mechanisms.
 

section 1: Highlights from STAR Science Programs

section 2: run14 Performance Report

15 GeV Au+Au

200 GeV Au+Au

section 3: run15 BUR request on p+p and p+A collisions

Twelve weeks:

√s = 200 GeV polarized p+p collisions, corresponding to 90 pb-1 integrated luminosity and 60% polarization. The run will be split in half between transverse and longitudinal polarization.

Five weeks:

√sNN = 200 GeV p↑+Au collisions to collect 300 nb-1.

Update necessary information since last BUR

section 4: run16 Request

HFT request for Run15/16  (draft April 16, 2014)
http://www.star.bnl.gov/protected/heavy/dongx/presentation/BUR_15_16_HFT_04152014.pdf

Talk at DIS on transverse W/Z results and projection:
https://drupal.star.bnl.gov/STAR/system/files/Fazio_DIS2014_STAR_W_v2.pptx

section 5: Detector and Upgrades relevant to BUR


2014 R&D and Upgrades:
https://drupal.star.bnl.gov/STAR/future/star-rd-and-upgrades-2014

HFT (assume we will have a new TDC for the vertex cut, and aluminum cables for PIXEL)
MTD
HLT
Offline
FMS+preshower (run15)
Roman Pots
HCAL (run16)

STAR run16-17 BUR

PAC Recommendation report in 2016

PAC Recommendation report in 2015

Latest version (v18) of BUR for run16 and run17

2015 PAC meeting agenda: https://indico.bnl.gov/conferenceDisplay.py?confId=1138

Last year's BUR available at: 
https://drupal.star.bnl.gov/STAR/star-run-15-16-bur 

new CAD projection:
CAD Projection in 01/06/2017
CAD Projection in 04/19/2015

We don't have the official projections from CAD yet for this year, but last year's is available: 
CAD Luminosity Project (2014)
For other species not listed in the document, a rule-of-thumb is to assume that the parton luminosity 
is the same: the A+B luminosity is Lum_pp/(A*B). 
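The rule-of-thumb above can be sketched in a few lines; this is only an illustration of the stated scaling, and the function name and the 100 pb-1 p+p luminosity used in the example are hypothetical.

```python
def nucleus_nucleus_lumi(lum_pp, a, b):
    """Estimate the A+B luminosity equivalent to a p+p luminosity,
    assuming equal parton luminosity: L_AB = L_pp / (A * B)."""
    return lum_pp / (a * b)

# Example: Au+Au (A = B = 197) from a hypothetical 100 pb^-1 p+p sample.
lum_auau = nucleus_nucleus_lumi(100.0, 197, 197)  # in pb^-1
print(lum_auau)  # ~0.0026 pb^-1, i.e. ~2.6 nb^-1
```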

Weekly Meeting (Friday 2-3PM) Title:

STAR Run 16-17 BUR

To join or start the meeting in BlueJean, go to:

https://bluejeans.com/708164902/2570?g=ojtgc5dfnvuua4dbfz2ww6jomvshk===

Meeting ID: 708164902

Just want to dial in? (http://bluejeans.com/numbers)

1) +1 408 740 7256

    +1 888 240 2560 (US Toll Free)

    +1 408 317 9253 (Alternate Number)

<<<<<<<<<<<<charges <<<<<<<<<
Dear STAR Collaborators:

I am writing to announce the formation of the Run 16-17 Beam Use Request Committee. The committee is charged with providing a BUR document for RHIC runs 16 and 17, with a planned guideline of 22 cryo-weeks for run16 and two possible options of 15 or 22 cryo-weeks for run17.

In the last BUR, STAR presented a request for run16 with 10 weeks of Au+Au at 200 GeV and 7 weeks of p+p at 510 GeV, envisioning a shut-down in run17 for preparation of BES II in run18 and run19. The current plan calls for an additional run17, with the BES II program in run19 and run20. Progress in scientific results, detector developments and installations relevant to the programs in the BUR should be documented. An initial draft of the BUR should be available by May 10 for collaboration review, and a final version should be submitted to the PAC by May 19.

The committee members are: Elke Aschenauer, Helen Caines, Dan Cebra, Xin Dong, Jim Drachenberg, Renee Fatemi (co-chair), Frank Geurts, Daniel Kikola, Flemming Videbaek, Lijuan Ruan, Alex Schmah, Ernst Sichtermann, Fuqiang Wang, Gang Wang, Nu Xu, Zhangbu Xu (co-chair).  

Progress and updates will be available at:
https://drupal.star.bnl.gov/STAR/public/smd/star-bur-run16-17. 

Thanks!
Zhangbu Xu

-----Original Message-----

From: Mueller, Berndt

Sent: Tuesday, March 31, 2015 9:58 PM

To: James Nagle; Morrison, David; Xu, Zhangbu; Ernst Sichtermann

Cc: Roser, Thomas; Fischer, Wolfram; Dunlop, James C

Subject: BURs for Run 16 & 17

 

Dear RHIC Spokespersons:

 

I am writing again to update my request for the beam use requests for the coming RHIC runs.

 

In consultation with the Office of Nuclear Physics we have decided to make a change to the future RHIC run schedule. Specifically, we now plan to run RHIC in both FY16 and FY17, followed by one year (FY18) without a RHIC run during which the low energy RHIC electron cooling (LEReC) system will be installed. The high statistics Beam Energy Scan II is then planned for the years FY19 and FY20. The modified plan will allow for a less aggressive schedule of the LEReC project. It will also relax the conflict between the heavy ion and spin physics programs of RHIC that remained unresolved at last year’s PAC meeting.

 

I request that you submit the annual beam use requests by May 19, 2015. The BURs should be for a 22-week RHIC run in FY16, and either a 15-week or a 22-week RHIC run in FY17.

 

I also ask the STAR Collaboration to present the iTPC proposal to the PAC and to provide an update on experimental efforts aimed at exploring possible phenomenological manifestations of the chiral magnetic effect.

 

Both collaborations should present a tentative schedule for the release of results from the data taken in runs 13 and 14 (p+p, Au+Au 15, Au+Au 200, 3He+Au).

 

Thanks, Berndt

Au+Au 200GeV

Last BUR under the assumption of 22 cryo-weeks for Au+Au 200 and pp510. 

run period of Au+Au collisions at √sNN = 200 GeV, integrating 10 nb-1 of luminosity with rare triggers for Upsilon states, gamma-jet correlations, B->J/ψ and J/ψ production, and 2 billion minbias events for Λc and differential studies of charm flow and correlations. A total integrated luminosity of over 20 nb-1 from the combination of run14 and run16 provides the necessary statistics for a measurement of each of the three Upsilon states. We also request to collect 2 billion minimum-bias Au+Au collision events at √sNN = 200 GeV in Run16. A new set of inner HFT (PXL) layers with aluminum cables, with significantly reduced multiple scattering, will be installed for run16. We have also proposed to improve the effectiveness of the online selection of collision vertices within the HFT fiducial coverage. The effective figure of merit in terms of signal statistics increases by about a factor of 6 for low-pT D0 in comparison to the dataset taken in run14, due to the reduced cabling material and the improved online vertex selection. This significant improvement will allow us to perform differential studies of the charmed hadron yields, flow and correlations in several centralities. More importantly, the high statistics and the improved pointing resolution for low momentum tracks will make the Λc measurement feasible (cτ of Λc ~ 60 μm).

The file discussing issues on the heavy-ion running for run16 is attached to this page.
FV

B->J/Psi projection

Comparison of B->e, B -> D and B->J/psi decay kinematics:

slide 5 in https://drupal.star.bnl.gov/STAR/system/files/1105BMeson_PythiaStuday_BNL_Guannan.pdf by Xin Dong and Guannan Xie

Projections for B->J/psi RAA

Projections for B->J/psi RAA for BUR16/17 by  Bingchu Huang
Projections for B->J/psi RAA for BUR16/17 with separated uncertainties from Au+Au and p+p baseline by Bingchu Huang




 

 

Projections for B->J/psi RCP:

R_CP (0-10%/ 40-60%)

R_CP (0-10%/ 60-80%)

BUR15 Executive Summary

The STAR Collaboration makes the following two-year beam-use proposal, in order to achieve its spin and relativistic heavy ion physics goals on a timescale consistent with intense international interest and competition in these areas, as well as to utilize RHIC beams effectively, taking full advantage of recent improvements in machine and detector capability.

Run | Energy          | Duration | System         | Goals                                         | Priority | Sequence
16  | √sNN = 200 GeV  | 13 wk    | Au+Au          | Λc, D v2, RAA, ϒ RAA; 10 nb-1, 2 billion MB   | 1        | 1
16  | √sNN = 62 GeV   | 4 wk     | Au+Au          | 1.2B MB, HT                                   | 4        | 2
16  | √sNN = 19.6 GeV | 1 wk     | d+Au           | 100M MB                                       | 4        | 3
17  | √s = 510 GeV    | 11 wk    | Transverse p+p | AN of W±, γ, Drell-Yan; L = 400 pb-1, 55% pol | 2        | 1
17  | √sNN = 19.6 GeV | 1 wk     | p+p            | 400M MB                                       | 4        | 2
17  | √sNN = 200 GeV  | 3 wk     | Ru+Ru          | 1.2 billion MB                                | 3        | 3
17  | √sNN = 200 GeV  | 3 wk     | Zr+Zr          | 1.2 billion MB                                | 3        | 4
Table 1-1: Scenario I-A: 22 cryo-weeks for Run 17 with isobar sources (Zr, Ru)

STAR’s top priority is to capitalize on the successful installation and operation of the Heavy Flavor Tracker (HFT) and Muon Telescope Detector (MTD) in runs 14 and 15. We request 13 weeks of Au+Au collisions at √sNN = 200 GeV to collect rare triggers for Upsilon states, gamma-jet correlations, B-->J/Psi and J/Psi production, as well as 2 billion minbias events for Lambda_c and differential studies of charm flow and correlations. The combination of run 14 and run 16 will allow for measurements of each of the three Upsilon states. The heavy flavor and quarkonium program must be completed before BES-II to fulfill STAR’s scientific program with the new HFT and MTD upgrades.

STAR’s second priority is to make the first significant measurement of the Sivers sign change and of non-perturbative evolution effects in transverse momentum distributions via reconstruction of W±, Z, direct photon and Drell-Yan single spin asymmetries in transversely polarized √s = 500 GeV p+p collisions. The sign change measurement is a fundamental test of QCD and is being pursued by other experiments, making a timely measurement imperative.

STAR’s third priority is to clarify the interpretation of the observed signatures of the chiral magnetic, chiral magnetic wave and chiral vortical effects by making measurements that disentangle the signal from background v2 effects. We request a 3-week run each for Ruthenium (Ru+Ru) and Zirconium (Zr+Zr) collisions. This choice of nuclei is ideal, as it allows for a variation in magnetic field of up to 10% while keeping nearly all other parameters the same. This decisive measurement of the ratio of charge separation in the isobars, while reducing the flow background to a minimum, will greatly advance our understanding of the chiral magnetic effect and has fundamental impact beyond the field of high-temperature QCD.

Run | Energy          | Duration | System         | Goals                                         | Priority | Sequence
16  | √sNN = 200 GeV  | 13 wk    | Au+Au          | Λc, D v2, RAA, ϒ RAA; 10 nb-1, 2 billion MB   | 1        | 1
16  | √sNN = 62 GeV   | 5 wk     | Au+Au          | 1.2B MB, HT                                   | 4        | 2
17  | √s = 510 GeV    | 12 wk    | Transverse p+p | AN of W±, γ, Drell-Yan; L = 400 pb-1, 55% pol | 1        | 1
17  | √sNN = 14.5 GeV | 1 wk     | d+Au           | 80M MB                                        | 4        | 5
17  | √sNN = 19.6 GeV | 1 wk     | d+Au           | 100M MB                                       | 4        | 4
17  | √sNN = 39.0 GeV | 1 wk     | d+Au           | 400M MB                                       | 4        | 3
17  | √s = 19.6 GeV   | 1 wk     | p+p            | 400M MB                                       | 4        | 2
Table 1-2: Scenario I-B: 22 cryo-weeks for Run 17 for the case where the Ru-96 source cannot be procured.

Our next priority is to take data at lower beam energies. There are two programs in this category:
a) Au+Au collisions at 62 GeV for measurements of inclusive jets and the charm spectrum at low energy.
With newly developed analysis techniques, an inclusive jet measurement is possible with minimum-bias data in Au+Au collisions. We have recently extended the measurement to Au+Au at 62 GeV with limited statistics from run 10. We request 4 (5) weeks of Au+Au collisions at this energy for an inclusive jet RCP measurement. This will also provide a measurement of the D0 spectrum from the HFT.
b) d+Au and p+p collisions at low energies to study the onset of high-pT hadron nuclear modification due to jet quenching.
STAR has produced a systematic measurement of the RCP of inclusive hadrons as a function of momentum at all beam energies taken at RHIC. The data show a systematic increase of RCP from less than unity to significantly above unity as the beam energy varies from the top energy of 200 GeV down to 7.7 GeV. A combination of d+Au and p+p collisions at energies where RCP is around unity will help us address the onset of final-state hadron suppression.

Run | Energy          | Duration | System         | Goals                                         | Priority | Sequence
16  | √sNN = 200 GeV  | 13 wk    | Au+Au          | Λc, D v2, RAA, ϒ RAA; 10 nb-1, 2 billion MB   | 1        | 1
16  | √sNN = 62 GeV   | 4 wk     | Au+Au          | 1.2B MB, HT                                   | 4        | 2
16  | √sNN = 19.6 GeV | 1 wk     | d+Au           | 100M MB                                       | 4        | 3
17  | √s = 510 GeV    | 11 wk    | Transverse p+p | AN of W±, γ, Drell-Yan; L = 400 pb-1, 55% pol | 2        | 2
17  | √sNN = 19.6 GeV | 1 wk     | p+p            | 400M MB                                       | 4        | 1
Table 1-3: Scenario II: 15 cryo-weeks for Run 17 (assuming isobar sources are not possible in run16)

In this beam use request the STAR Collaboration presents five compelling scientific programs for the upcoming RHIC runs 16 and 17. We discuss highlights from the scientific publications, ongoing analyses and detector performance from recent runs. We also outline the planned upgrades in the next few years in preparation for Beam Energy Scan II.

Chiral Magnetic Effect theory and data projections

Hi, Zhangbu and all

I've received the magnetic field calculation results for isobar collisions from the theorists, http://www.star.bnl.gov/protected/bulkcorr/rexwg/flow/Parity/CuAu/By_cent.pdf

and I made a projection of our measurements.
http://www.star.bnl.gov/protected/bulkcorr/rexwg/flow/Parity/CuAu/PlotIsobar.gif

If we have 250M minbias events in each case, there will be a sweet zone of 20-50% centrality,
where the difference between Ru+Ru and Zr+Zr is a 3 sigma effect (combining centralities).

Gang Wang
Department of Physics & Astronomy UCLA
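The "combining centralities" statement above can be sketched as follows: if each centrality bin gives an independent, roughly Gaussian significance for the Ru+Ru vs Zr+Zr difference, the combined significance adds in quadrature. The per-bin values below are hypothetical illustrations, not the actual projection.

```python
import math

def combined_significance(per_bin_sigmas):
    """Combine independent per-bin significances in quadrature:
    s_comb = sqrt(sum_i s_i^2)."""
    return math.sqrt(sum(s * s for s in per_bin_sigmas))

# e.g. three 20-50% centrality bins, each at a hypothetical ~1.7 sigma:
print(combined_significance([1.7, 1.7, 1.7]))  # ≈ 2.94 sigma combined
```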

BUR writeup:
summary of the experimental study on chiral magnetic effect

Highlights and Accomplishments since last BUR

 

BES (14.5GeV)

  

-----Original Message-----
From: Nu Xu [mailto:nxu@lbl.gov]
Sent: Friday, April 17, 2015 5:26 AM
To: Xu, Zhangbu
Cc: Xu Nu; Alexander Schmah; Cebra Dan; Kikola Daniel; Aschenauer, Elke; Sichtermann Ernst P; Videbaek, Flemming; Geurts Frank; Fuqiang Wang; Wang Gang; Caines Helen; Drachenberg Jim; Ruan, Lijuan; Fatemi Renee; Dong Xin
Subject: Re: BUR16 committee meeting Firday 2-3PM

 

Hi Zhangbu and All,

 

Below is the list of analysis topics (and presentations) from 14.5 GeV Au+Au collisions. I have divided those topics into four groups, according to their physics relevance. Roughly, about 50% of these topics are close to the end of the analysis phase; however, almost all of them are awaiting the results of the 14.5 GeV embedding.

 

As many of you know, we will have our first discussion (via BlueJeans) on the 14.5 GeV data analysis status next Monday, April 20th. After the meeting, we plan to prepare a summary including the “status/problem/plan for presentation and paper” for all topics.

 

Status of the 14.5 GeV data analysis

 

I. v1, v2, …vn:

(1) Paul S  - Charged particle v_n

https://drupal.star.bnl.gov/STAR/system/files/sorensen_14.5GeV_update.pdf

(2) Yadav  -  v1 for p, pbar,  pi+/-,  K+/- http://www.star.bnl.gov/protected/bulkcorr/ypandit/KaonPaper/BulkCorr/Kaonv1Paper_Update.pdf

(3) Shusu  - identified particle v2:

http://www.star.bnl.gov/protected/bulkcorr/sss/14.5_flow/14.5_v2_BES_dis.pdf

 

II. Spectra and RCP:

(1) Yu Ning and Rihan – Light nuclei production http://www.star.bnl.gov/protected/lfspectra/yuning/BES/pwg/B2-15-20150420.pdf

(2) Stephen Horvat and Daniel Brandenburg - R_cp http://www.star.bnl.gov/protected/lfspectra/jdb/run14/AuAu15/pidRcp/pidRcp_Update_April_10_lfs.pdf

(3) Spectra of pion, Kaon, Proton, K0, L, Xi, Omega from 14.5 GeV Au+Au collisions ??? (no responses yet from co-conveners)

 

III. High moments:

(1) Xiaofeng Luo and Jochen Thaeder  - net-charge/net-proton moments http://www.star.bnl.gov/protected/bulkcorr/luoxf/PWG_discussion/Moments_14.5GeV_20150408.pdf

(2) Ji Xu  - High moments net kaons

http://www.star.bnl.gov/protected/bulkcorr/xuj/bulk/JiXu_netK_moments_Version2.pdf

 

IV: Chiral effects:

(1) Gang Wang  -  CME/CMW:

http://www.star.bnl.gov/protected/bulkcorr/rexwg/flow/Parity/BES/14.5GeV_04082015.pdf

 

 

Best regards,

 

Nu Xu

Progress on production with the Muon Telescope

 http://www.star.bnl.gov/protected/lfspectra/ruanlj/MTD/STAR_BUR1617_MTD_ProductionStatusRun1415.pdf

Low Energy Reference datasets

Au+Au 62 GeV (I think PHENIX is proposing He3+Au for run16 at various energies)
Daniel, Alex and Helen on jets and heavy flavor
Do we need a p+p reference, or go all out for Au+Au?


p+A 

Helen, Cebra and Alex on R_PA, flow (lower energies)?  
Elke on Spin 

dielectron: 
https://drupal.star.bnl.gov/STAR/blog/yiguo/error-projection-196-dau-die-spectra 

Luminosity projections and Discussions

  new CAD projection:
CAD Projection in 04/17/2015

 

NPE in Au+Au 62.4 GeV - current results and prospects for run 16

 

NPE spectra and v2 results from Au+Au 62.4 GeV run 2010

Slides 18 - 21 in Recent Open Heavy Flavor Results from STAR (RHIC & AGS Annual Users' Meeting 2014)

Prospects for NPE measurements in Au+Au 62.4 GeV in run 16

(based on inputs from Mustafa Mustafa and James Dunlop)

Attached are ratio plots from theory calculations. The total charm cross-section at 62.4 GeV is down by a factor of 4-5 relative to 200 GeV. However, the pT dependence is not trivial.

The pT spectrum of charm quarks is, consequently, harder at 200 GeV. For the R_cp measurement this means a factor of 15-20 drop in cross-section in the interesting pT region (4-10 GeV/c).
For NPE v2, the harder spectra at 200 GeV imply that the NPE cross-section is down by a factor of 10 at low electron pT, due to the feed-down from higher pT.

Regarding the background, the NPE background is down by only a factor of 2 (with some pT dependence) from 200 to 62 GeV.

Taking the projections for top energy of min/max = 2.2/3.6 nb^-1 per week (take the middle, say 3 nb^-1) at 200 GeV, then scaling down by E^2, one would expect ~1.5 nb^-1 delivered in such a run. Taking 70% live time gives ~1 nb^-1 recorded.

So a factor of 5 or so is possible vs. what was triggered in Run 10, perhaps somewhat better with a narrower vertex distribution.
It is not clear whether that pushes us over the top to make the HT trigger useful. It is unlikely, especially with the worse NPE S:B with the HFT.
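The estimate above can be reproduced in a few lines. This is only a sketch of the stated arithmetic; the 5-week run length is an assumption taken from the Au+Au 62 GeV run plans discussed elsewhere on this page.

```python
def projected_lumi_62(lumi_200_per_week=3.0, weeks=5, live_frac=0.70):
    """Scale the 200 GeV weekly luminosity (nb^-1/week) by (62.4/200)^2
    and apply a live-time fraction; returns (delivered, recorded) in nb^-1."""
    scale = (62.4 / 200.0) ** 2            # E^2 scaling between beam energies
    delivered = lumi_200_per_week * scale * weeks
    return delivered, delivered * live_frac

delivered, recorded = projected_lumi_62()
print(round(delivered, 2), round(recorded, 2))  # ≈ 1.46 delivered, 1.02 recorded
```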

Raw inclusive electron spectrum from Run 10 data:

Charm cross section in p+p 200 GeV vs p+p 62.4 GeV



Run16 and AuAu62 GeV estimate

I gave a presentation at the last November collaboration meeting discussing Run16 and a quick estimate of the D0 v2 from 5 weeks of Au+Au 62 GeV.

Au+Au62 estimate from HFT

comments and feedbacks

 -----Original Message-----

From: Sorensen, Paul
Sent: Friday, May 15, 2015 1:31 AM
To: Xu, Zhangbu
Subject: Re: [Starpapers-l] STAR Beam Use Request for run 16 and 17 (ready for collaboration feedback)

 

I believe there is an error in Table 5-2. Isn’t CME supposed to be larger (>) in Ru+Ru than in Zr+Zr? It’s listed as smaller (<) now.

 

-----Original Message-----
From: Carl Gagliardi [mailto:c-gagliardi@tamu.edu]
Sent: Thursday, May 14, 2015 4:05 PM
To: Xu, Zhangbu
Cc: rfatemi@pa.uky.edu; Aschenauer, Elke; 'Carl Gagliardi'
Subject: RE: [Starpapers-l] STAR Beam Use Request for run 16 and 17 (ready for collaboration feedback)

 

Hi Zhangbu (cc Renee and Elke):

 

I've made it through Sections 1-3 of the Version 12 draft BUR.  I systematically tried to ignore minor typos, etc., that the PAC members should be able to read over easily.  With that "constraint", I only have a few comments or suggestions so far:

 

--  General remark:  The "Highlights" section is written in a very brief style.  That's a good thing!  But it means the knowledge that the PAC members bring to the document will be critical for them to understand everything that's discussed.  (I admit that I had difficulty following some of the heavy ion highlights.)  During your PAC talk, I could imagine that you might get one or two "what's this mean?" questions from members who had their interest tweaked by some figure.  Be prepared.

 

--  Pg 13, Fig 2-9 caption:  j_T is ~ Delta R * z * pT_jet.  (The "* z" is missing.)

 

--  Pg 16, 2nd last line:  "... into a new ERA of lower x ..."  (Admittedly, this is just a typo.  But I simply couldn't read past it!)

 

--  Pg 33:  The upper paragraph says Fig 3-2 is from p-Au running.  The figure caption says it's from p+p.  I suspect the figure caption is correct.

 

--  Pg 36:  At present, there is nothing in Sect 3.2 "Dataset for inclusive jet and dijet A_LL".  I'm probably the internal expert here.  Should I prepare a brief paragraph and update our standard Runs 9+15 projection plot for insertion here?

 

I'll drop you another note after I've made it through the Runs 16&17 requests sections.  I might (or might not) also have some retrospective suggestions for the Executive Summary at that time.

 

Carl

 

pp,pA BUR

 request and projections for pp,pA

STAR run18-19 BUR

STAR run16-17 BUR

CAD luminosity projection: 

http://www.rhichome.bnl.gov/RHIC/Runs/RhicProjections.pdf

Overleaf document: 
https://www.overleaf.com/8923878fsmcynsyyzjh#/33782541/

weekly meeting eZuce: 
Title: STAR BUR 2017
Description:
Community: STAR
Meeting type: Open Meeting (Round Table)
Meeting Access Information: eZuce SRN
Application http://srn.ezuce.com/joinSRN?meeting=M2MvMB2a2iDMDl929MDM92
Mobile App : Meeting ID: 102 3196
or Link: http://srn.ezuce.com/join?meeting=M2MvMB2a2iDMDl929MDM92
- Phone Bridge ID: 102 3196

The Charge:

<<<<<<<<<<< 

 

A call for the annual Beam Use Request (BUR) has been sent from the BNL ALD to RHIC collaborations on February 22nd, 2017:

Dear RHIC Spokespersons:

I am writing to request that you submit the annual Beam Use Request for the RHIC runs in 2018 and 2019 and related material.

We current[ly] anticipate, assuming favorable budget conditions, runs of 15 cryo-weeks (including 2 weeks for the Coherent electron Cooling test) in FY2018 and 24 cryo-weeks each in FY2019 and FY2020. Less favorable, but still likely, budget conditions would only allow for a combined run of 20 cryo-weeks in FY2019/FY2020. The beam use request should consider both possibilities.

The submissions are due by May 15, 2017. Since PHENIX has ended data taking and sPHENIX has not yet started construction, I expect to receive a BUR from STAR only.

I also ask STAR and PHENIX to report on the status of analyses of data from previous RHIC runs, especially the runs in 2014-16.

In addition, I invite the STAR and sPHENIX Collaborations to present letters of intent for proposals of modest forward upgrades to their detectors for data taking after 2021 for consideration by the PAC.

The dates for this year’s RHIC PAC meeting are June 15-16, 2017. A list of current PAC members is attached for your information.

Thanks in advance,

Berndt

 

The Committee and responsibilities

Gang, Voloshin, Prithwish (Isobar for CME), 
Grazyna,  Geurts (BES-II), 
Cebra, Herrmann (FXT), 
Lisa, Xiaofeng (general), 
Zhenyu, Gregory (HI accomplishments), 
Renee, Oleg Eyser (Spin accomplishments), 
Flemming, Rosi (Detector upgrades)
Jerome (software)

Summary Tables


for run 18 

year | Energy        | time    | species | events   | priority | sequence
18   | 200 GeV       | 3.5 wk  | Ru+Ru   | 1.2B MB  | 1        | 2
18   | 200 GeV       | 3.5 wk  | Zr+Zr   | 1.2B MB  | 1        | 2
18   | 27 GeV        | 2 wk    | Au+Au   | >500M MB | 2        | 3
18   | 3.0 GeV (FXT) | 3 days? | Au+Au   | 100M MB  | 3        | 1

for run 19 

collider energy (GeV) | weeks | events || FXT energy (GeV) | days | events
19.6                  | 4.5   | 400M   || 4.5              | 2    | 100M
14.5                  | 5.5   | 300M   || 3.9              | 2    | 100M
11.5                  | 5?    | 230M   || 3.5              | 2    | 100M
62.4                  |       |        || 7.7              | 2    | 100M
39                    |       |        || 6.2              | 2    | 100M
27                    |       |        || 5.2              | 2    | 100M
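The FXT energies in these run-plan tables follow from the collider energies: a beam of √sNN/2 per nucleon striking a nucleon at rest gives s = 2·m_N·(E_beam + m_N). A minimal sketch of that kinematic relation (the pairing of the collider and FXT columns is my reading of the table):

```python
import math

M_N = 0.938  # nucleon mass in GeV (approximate)

def fxt_sqrts(collider_sqrts_nn):
    """CM energy per nucleon pair in fixed-target mode: one beam of energy
    sqrt(s_NN)/2 per nucleon hits a nucleon at rest, so s = 2 m (E_beam + m)."""
    e_beam = collider_sqrts_nn / 2.0
    return math.sqrt(2.0 * M_N * (e_beam + M_N))

for e in (19.6, 14.5, 11.5, 62.4, 39.0, 27.0):
    print(e, round(fxt_sqrts(e), 1))
# close to the FXT column values 4.5, 3.9, 3.5, 7.7, 6.2, 5.2 GeV
```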

Plan B: 

collider energy (GeV) | weeks | events || FXT energy (GeV) | days | events
19.6                  | 4.5   | 400M   || 4.5              | 2    | 100M
14.5                  | 5.5   | 300M   || 3.9              | 2    | 100M
27 isobar             | 5?    | ?      || 3.5              | 2    | 100M
62.4                  |       |        || 7.7              | 2    | 100M
39                    |       |        || 6.2              | 2    | 100M
27                    |       |        || 5.2              | 2    | 100M

for run 19/20 combined

for run 20 (after run 19)

collider energy (GeV) | weeks | events || FXT energy (GeV) | days | events
11.5                  | 5     | 230M   || 3.5              | 2    | 100M
9.1                   | 9.5   | 160M   || 3.2              | 2    | 100M
7.7                   | 12    | 100M   ||                  |      |


upgrade comments

Flemming

I can certainly do the iTPC and eTOF. In the previous BUR the upgrade parts were actually quite short;
e.g. the FPost was just one-half page with a layout, and the BES-II material ~2 pages.

I would suggest possibly the following:

Have a brief intro that shows the upgrades in the STAR detector context (one nice overview figure that is used often);
here we could show one of the acceptance plots illustrating the acceptance that the iTPC and eTOF bring.

For the iTPC, 1 page:
— brief description, status, readiness for run-19
    picture of the prototype from SDU

For the EPD, since it is coming up for run-18, some more info, maybe 1.5-2 pages:
pics: detailed layout, picture of the 1/8 quadrant installed, a simple performance plot from this year.
The effect of the improvement in event-plane response really shows up in the physics performance plots, which have it included.

For the eTOF, one page:
placement, status, performance.


Rosi
That sounds reasonable.  Yes, I think an overview should show the
picture of STAR with all the detectors included, plus the additional PID
acceptance gain, since that is the point.

It should be rather easy to write a couple of pages on the EPD with the
performance figures.  We have the performance improvement figures in
our proposal already, and then with something from this year's data
that should be good.

Will the eTOF have any performance plots from this year?

Perhaps we can pick one of the observables from BES I and show the
improvement with all three detectors.  Alex had already done something
like this for the lambda polarization (since I end up listening to
Mike a lot, this is the one I'm most familiar with).  Though perhaps
this would be more appropriate in the sections discussing the physics.


STAR run19-20 BUR

 STAR Beam Use Request for Runs 19 and 20

 

Guidance

Meeting Schedule & Time Line

  • Weekly committee meetings on Tuesday at noon (EST)
    • Blue Jeans information:
      To join the Meeting:
      https://bluejeans.com/406880288
      
      To join via Room System:
      Video Conferencing System: bjn.vc -or-199.48.152.152
      Meeting ID : 406880288
      
      To join via phone :
      1)  Dial:
      	+1.408.740.7256 (United States)
      	+1.888.240.2560 (US Toll Free)
      	+1.408.317.9253 (Alternate number)
      	(see all numbers - http://bluejeans.com/numbers)
      2)  Enter Conference ID : 406880288
      
  • Time lines:
    • draft document to Collaboration: April 17, 2018
    • final version to Berndt Mueller: May 1, 2018
    • NPP 2018 PAC Meeting: June 7-8, 2018
BUR 2018 Document

Official charge Berndt Mueller

Dear Helen & Zhangbu:

I am writing to solicit the STAR beam use request for RHIC Run-19
and to request presentations from the RHIC collaborations at this
year’s PAC meeting, which is scheduled to be held on June 7-8, 2018.
 
We anticipate the 2019 RHIC run to constitute the first year of a
two-year high statistics beam energy scan. The 2017 PAC assigned
highest priority to proposed Au+Au runs at 11.5, 14.5, and 19.6 GeV, 
interleaved by brief fixed target runs at the same beam energies, as 
well as dedicated fixed target runs corresponding to CM energies of 
7.7, 6.2, and 5.2 GeV. The PAC tentatively recommended Au+Au runs in 
the collider mode at 9.1 and 7.7 GeV during the 2020 RHIC campaign.
 
The STAR collaboration should not simply take these tentative 
recommendations as a given, but reconsider and justify the prioritized 
set of beam energies and the requested accumulated statistics at each 
energy, assuming either 24 cryo-week runs or 20 cryo-week runs in each 
of the years 2019 and 2020.
 
The beam use request should be submitted no later than May 1 in order 
to allow the PAC members to study it in detail before the meeting.
 
In addition to the STAR beam use request, I would like to request
updates from STAR and PHENIX on the status of results from Runs 14-17 
(14-16 for PHENIX).
 
Best regards
Berndt


2018 PAC Committee Members 

M. Gazdzicki (CERN), J. Harris (Yale, chair), H. Huang (UCLA),
V. Koch (LBNL), J.C. Peng (UIUC), S. Pratt (MSU), K. Rajagopal (MIT),
A. Schaefer (Regensburg), M. Stephanov (UIC), and J. Velkovska (Vanderbilt).
Scott Pratt has indicated that he will not be able to attend.


STAR BUR Committee and Assignments

Physics Highlights (Runs 14-17) -- Heavy Ion: Xin Dong, Takafumi Niida; Spin: Kevin Adkins
Run 19/20 Requests --  BES2: Declan Keane, Grazyna Odyniec, Sevil Salur; FXT: Daniel Cebra, Terry Tarnowsky
Upgrades           --  Flemming Videbaek, Rosi Reed, Ingo Deppner
Ex-Officio         --  Helen Caines, Zhenyu Ye, Jim Drachenberg, Grigory Nigmatkulov
Frank Geurts (chair)

STAR run20-21 BUR

 STAR Beam Use Request for Runs 20 and 21

 

Guidance

Meeting Schedule & Time Line

  • Weekly committee meetings Tuesday noon - 1h30pm (EDT)
    • Blue Jeans information:
      To join the Meeting:
      https://bluejeans.com/625208370
      
      To join via Room System:
      Video Conferencing System: bjn.vc -or-199.48.152.152
      Meeting ID : 625 208 370
      
      To join via phone :
      1)  Dial:
      	+1.408.740.7256 (United States)
      	+1.888.240.2560 (US Toll Free)
      	+1.408.317.9253 (Alternate number)
      	(see all numbers - http://bluejeans.com/numbers)
      2)  Enter Conference ID : 625 208 370
      
  • Time lines:
BUR 2019 Document

Official charge Berndt Mueller

Dear Helen & Zhangbu:
 
I am writing to solicit the STAR beam use request for RHIC Run-20
and to request presentations at this year’s PAC meeting, which is 
scheduled to be held on June 10-11, 2019.
 
The 2020 RHIC run will be the second year of the planned three-year
high statistics beam energy scan. The 2018 PAC tentatively assigned 
first priority for Runs 20-21 to data taking in the collider mode at 
7.7, 9.1, and 11.5 GeV CM energy, accumulating at least 100M, 160M,
and 230M min bias events, respectively. Second priority was assigned
to fixed target runs at 3.0, 3.2, 3.5, 5.2, and 6.2 GeV CM energy, 
acquiring at least 100M events at each energy, but did not specify 
exactly in which order these data should be taken, except that 
optimal use should be made of the availability of electron cooling 
to enhance the luminosity.
 
The STAR collaboration should not simply take these tentative 
recommendations as a given, but reconsider and justify the 
prioritized set of beam energies and the requested accumulated 
statistics at each energy, assuming either a 24 or a 28 cryo-week 
run in FY2020, followed by a 20 cryo-week run in FY2021. STAR should
also consult with C-AD about the number of dedicated LEReC 
commissioning weeks that are expected to be required during Run-20.
 
In addition, I request presentations on (i) the status of data 
analysis from previous RHIC runs and (ii) an update of the physics
goals for a short (16 cryo-weeks) forward Spin physics run in FY22 
with 500 GeV p+p collisions. 
 
The beam use request should be submitted no later than May 15 in 
order to allow the PAC members to study it in detail before the meeting.
 
Best regards
Berndt


2019 PAC Committee Members 

Masayuki Asakawa, John Harris (chair), Huan Huang, Volker Koch, Jen-Chieh Peng,
Scott Pratt, Krishna Rajagopal, Mikhail Stephanov, Julia Velkovska

Bill Christie, Dmitri Denisov, Jamie Dunlop, Wolfram Fischer, Achim Franz (secretary),
Hong Ma, Berndt Mueller, Thomas Roser, Ferdinand Willeke


STAR BUR Committee and Assignments

Physics Highlights -- 
  - Heavy Ions: Aya Jowzaee (BulkCorr), Hanseul Oh (Jets), Xin Dong (HF), Sevil Salur (Spectra);
  - Spin: Zilong Chang, Jinlong Zhang
BES-II Upgrades:  Irakli Chakaberia (iTPC), Tetyana Galatyuk (eTOF), Rosi Reed (EPD)
BES-II Run Plan: Dan Cebra, Declan Keane, Grazyna Odyniec, Xiaofeng Luo
pp 500 Run Plan: Jim Drachenberg, Elke Aschenauer, Scott Wissink
The case for Au+Au at 16.7 GeV: Xiaofeng Luo
The case for a small systems scan: Wei Li

Ex-Officio    --  Helen Caines, Zhenyu Ye, Grigory Nigmatkulov
Chair         --  Frank Geurts

STAR run21-25 BUR

 

Background Materials

BUR Documents (2020)


Meeting Schedule & Time Line

Official charge Berndt Mueller

For the Sept 10-11 meeting of the PAC I would like you to prepare the following documents and presentations:
 
STAR: Beam Use Request for Run-21 and Run-22
STAR and sPHENIX: Beam Use Requests for Runs 23-25
 
The BURs should be based on the following number of expected cryo-weeks:
2021:  24 (28)
2022:  20
2023:  24 (28)
2024:  24 (28)
2025:  24 (28)
 
Presentations only:
STAR: Update on spin physics and isobar run analyses
PHENIX: Update on ongoing analysis efforts and data archiving effort
sPHENIX: Update on EIC EoI based on sPHENIX
 
The Beam Use requests should be submitted in written form no later than August 31. 2020.
 
Thank you, Berndt


2020 PAC Committee Members (Mini-PAC version)

John Harris (chair), Huan Huang, Volker Koch, Jen-Chieh Peng,
Scott Pratt, Krishna Rajagopal, Mikhail Stephanov, Julia Velkovska

Maria Chamizo‐Llatas, Bill Christie, Dmitri Denisov, Jamie Dunlop, Wolfram Fischer, Achim Franz (secretary),
Hong Ma, Berndt Mueller, Thomas Roser, Ferdinand Willeke

STAR BUR Committee and Assignments

Two-year BUR: Run-21 BES-II completion:  Daniel Cebra, Frank Geurts, Grazyna Odyniec, Flemming Videbaek
Run-22 pp500: Bernd Surrow, Carl Gagliardi, Renee Fatemi, Scott Wissink, Elke Aschenauer

STAR in sPHENIX Era (Runs 23-25): Run-23,25 AuAu200: Heavy-ion running during sPHENIX Subcommittee (Frank Geurts, chair)
Run-24 pp200,pAu200: Cold QCD folks

Other programs:
O+O: Jiangyong Jia and Wei Li
Au-Au 17 GeV: Xiaofeng Luo
FXT extension to higher and lower energies: Daniel Cebra

Physics Highlights:
Cold QCD: Maria Zurek and Ting Lin
Jets: Raghav Elayavalli
BulkCorr+Isobar: Prithwish Tribedy, Niseem Magdy, Jiangyong Jia
LFSUPC: Daniel Cebra
Heavy Flavor: Matt Kelsey

BESII+FXT progress: Daniel Cebra, Flemming Videbaek

Upgrades:
eTOF: Ingo Deppner
Forward : Elke Aschenauer, Scott Wissink, Flemming Videbaek

Ex-Officio    --  Helen Caines, Lijuan Ruan, Xin Dong, Rongrong Ma, Takafumi Niida
Chair         --  Ken Barish 

sPHENIX Running Periods (Runs 23-25)

 

Subcommittee on STAR in Runs 23-25

STAR in sPHENIX Era (Runs 23-25) :: BUR subcommittee


Run-23/25 AuAu200 Heavy-Ion running

Background Materials

PWG Inputs

Relevant guidance from Berndt Mueller

[...]

STAR and sPHENIX: Beam Use Requests for Runs 23-25

The BURs should be based on the following number of expected cryo-weeks:
2021: 24 (28)
2022: 20
2023: 24 (28)
2024: 24 (28)
2025: 24 (28)

[...]

Tasks/Assignments

JetCorr		: Raghav Kunnawalkam Elayavalli, Peter Jacobs, Nihar Sahoo
BulkCorr	: Jiangyong Jia, Prithwish Tribedy, Fuqiang Wang
Heavy Flavor	: Zebo Tang, Barbara Trzeciak
LFS-UPC		: Tetyana Galatyuk, Wangmei Zha, Frank Geurts

chair: Frank Geurts


STAR run22-25 BUR

  Background Materials

BUR Documents (2021)

Meeting Schedule & Time Line

  • Time line:
    • draft document to Collaboration: May 24, 2021
    • final version to ALD: May 31, 2021
    • NPP 2021 PAC Meeting: June 22-23, 2021
  • Weekly committee meetings Friday 11:30 - 1 pm (EDT)
Meeting URL: https://ucr.zoom.us/j/92406695796?pwd=NmhLTzFFMWZTT0VXWGRMRWtFK3BZdz09

BNL Nuclear Physics PAC 2021 Charge and Agenda

Agenda: 
* STAR: Beam Use Requests for Runs 22-25 
* sPHENIX: Beam Use Requests for Runs 23-25 
* CeC: Beam Use Requests 

The BURs should be based on the following number of expected cryo-weeks. 
First number is minimal expected RHIC run duration and second number is optimal duration: 
2022: 18 (20) 
2023: 20 (28) 
2024: 20 (28) 
2025: 20 (28)


2021 PAC Committee Members 

Roberta Arnaldi, John Harris (chair), Huan Huang, Volker Koch, Jen-Chieh Peng, Scott Pratt, 
Krishna Rajagopal, Ana Stasto, Mikhail Stephanov, Julia Velkovska 

Maria Chamizo‐Llatas, Bill Christie, Dmitri Denisov, Jamie Dunlop, Wolfram Fischer, Achim Franz (secretary), 
Hong Ma, Michiko Minty, Thomas Roser, Ferdinand Willeke

STAR BUR Committee and Assignments

Run-22 p+p 500 GeV:  
Run-24 p+p and p+A 200GeV:
Run-23,25 Au+Au 200GeV: Niseem, Toshihiro, Zhenyu, ... 

Other programs: 

Highlights: 
Cold QCD: 
CME Search and Isobar Run: 
Jets: 
Bulk Correlations: Niseem 
LFSUPC: 
Heavy Flavor: Zhenyu 
Run-22 Performance: 

Upgrades: Forward : Akio, Zhenyu 

Ex-Officio -- Helen Caines, Lijuan Ruan, Xin Dong, Rongrong Ma, Takafumi Niida 
Chair -- Ken Barish 

STAR run23-25 BUR

Background Materials

BUR Documents (2022)

Meeting Schedule & Timeline

Meeting URL: https://ucr.zoom.us/j/99988762586?pwd=eWlTOXNxanphZHdLdHhoTGpTUkZPdz09
  • Meeting recordings
  • Friday, Apr 1, 2022
    • Timeline Update (official charge => BUR due to PAC on 5/6 and draft to collaboration a week earlier)
    • Last year's sampled luminosity assumptions (Rongrong's email)
    • DAQ Upgrade (Flemming)
    • Assumptions for BUR (discussion)

BNL Nuclear Physics PAC 2022 Charge and Agenda

* STAR: Beam Use Requests for Runs 23-25 
* sPHENIX: Beam Use Requests for Runs 23-25 
* CeC: Beam Use Requests 

The Beam Use Requests should be submitted in written form to PAC by May 6, 2022
The BURs should be based on the following number of cryo-weeks. The first number is the 
proposed RHIC run duration for scenario 1 and the second number corresponds to optimal 
duration (scenario 2) presented to the DOE-ONP in BNL’s FY24 Lab Managers’ Budget Briefing:
 
2023: 24 (28) 
2024: 24 (28) 
2025: 24 (28)


2022 PAC Committee Members

Roberta Arnaldi, John Harris (chair), Huan Huang, Leticia Cunqueiro Mendez, Jen-Chieh Peng, 
Scott Pratt, Krishna Rajagopal, Claudia Ratti, Mikhail Stephanov, Julia Velkovska 

Maria Chamizo‐Llatas, Bill Christie, Dmitri Denisov, Jamie Dunlop, Wolfram Fischer, 
Achim Franz (secretary), Haiyan Gao, Hong Ma, Michiko Minty

STAR BUR Committee and Assignments

Daniel Brandenburg, Daniel Cebra, Raghav Kunnawalkam Elayavalli, Frank Geurts, Jiangyong Jia, 
Xiaofeng Luo, Niseem Magdy, Sooraj Radhakrishnan, Nihar Sahoo, Sevil Salur, Subhash Singha, 
Barbara Trzeciak, Fuqiang Wang, Yi Yang, Zhenyu Ye, Hanna Zbroszczyk

Assignments (Google Doc)

Ex-Officio -- Helen Caines, Lijuan Ruan, Xin Dong, Rongrong Ma, Takafumi Niida 
Chair -- Ken Barish 

STAR run24-25 BUR

  

BUR Documents (2023 May) for Run24-25

Background Materials

Meeting Schedule & Timeline

  • Time line:
    • draft for internal review: July 28, 2023
    • for management review: August 4, 2023 
    • draft document to Collaboration: August 11, 2023
    • final version to ALD: August 18, 2023
    • NPP 2023 PAC Meeting: September 11-12, 2023
      • NPP PAC Meetings
  • Weekly committee meetings Tuesday 11 am - 12:30 pm (EDT)

Meeting URL: https://bnl.zoomgov.com/j/1602989668?pwd=dFhzajk3SGduQnk4dGMvMGlhV0FCQT09

  • Tuesday, August 8, 2023
Passcode: 6cQ.W=N6

1) General update - Run23 ended early; 6 weeks of Au+Au included in Run24; pp/pAu plan unchanged
2) Updated goals for Run25 AuAu program (Rongrong); keep the projection plots unchanged.
3) Remaining discussion on Run24, if any
4) Plan to release BUR to collaboration on August 11. 
  • Tuesday, August 1, 2023
https://bnl.zoomgov.com/rec/share/bm0MLXU85u2h6I17eB1qWIW_C57taFl68dkW8OZZafAjnnDbvOZA9A4F_9fKLUgY.9Xrq8_5ej18qrA43 
Passcode: DP0%GsCG
1) General update -deadline for management review: August 4th
2) New luminosity projection for run24 from CAD (Elke, all), -CAD released RHIC luminosities Aug. 7.
3) Run24 pp/pAu splitting update (Xiaoxuan, all) - updated projection based on new CAD lumi
4) Summary on projections of run25 (Rongrong and all)
5) Internal review status (all)

 
  • Tuesday, July 25, 2023
Passcode: bX%+D9&i
1) General update : internal deadline July 28
2) Discussion on set-up time for run24 (Elke and all)  ---5.5weeks set up for run24
3) Run24 pp/pAu splitting, radial polarization (Xiaoxuan, all) --- wait for new CAD luminosity projection
4) Update on projections of run25 (Rongrong and all) - final version of projections
5) Status of BUR writing- each subsection 
https://www.overleaf.com/6843367976ddhtqhqsdzqq
6) Internal review assignments:
https://docs.google.com/document/d/1812pdhU-Qdni0P-S8SZJ_IYehpfzy3gqDWYcTEhzFkU/edit

7) Anything else outstanding

  • Tuesday, July 18, 2023
Recording: https://bnl.zoomgov.com/rec/share/GsXzJV4TRY7zUNwZ2fB5PHF4Qi1AFV7O2ef7wBwS8OLS9oHGYDo0iA0nmYBPYg_z.q456xg3OxteG8UnB
Passcode: +4MhWQg^
1) General update - deadline July 21
2) Update & discussion on projections of run25 (Rongrong and all)
3) Update on projections of run24 (Xiaoxuan and all)
4) Status of BUR writing- each subsection 
5) Internal review assignments
  • Tuesday, July 11, 2023
1) General update -internal deadline July 21
2) Update on projection
3) Status and planning of BUR writing - each subsection
1) General update: internal deadline July 21 
2) Highlights from CF pwg (Hanna)
3) Highlights from light flavor & UPC (Daniel C)
4) discussion on radial polarization with Run24 (Jae, Carl)
1) General update
2) Cold-QCD highlights & projections on di-hadron correlation in pp/pAu   (Xiaoxuan )
3) Other updates: skip meeting of June 20 due to overlap with trigger board mtg 
4) Presentation in next meeting on June 27 (Hanna, Daniel). 

BNL Nuclear Physics PAC 2023 Charge and Agenda

 
*sPHENIX: Beam Use Requests for Runs 24-25
*STAR: Beam Use Requests for Runs 24-25

The Beam Use Requests should be submitted in written form to PAC by August 11, 2023, by emailing the BUR directly to Fran (copying Haiyan and John) or by providing a link to access the BUR before the due date.

The BURs should be based on the following number of cryo-weeks. For Run 2024, we ask that you consider three scenarios for 20, 24 and 28 cryo-weeks each, given the uncertain budgetary situation. For Run 2025, the first number is the proposed RHIC run duration for scenario 1 and the second number corresponds to optimal duration (scenario 2) presented to the DOE-ONP in BNL’s FY25 Lab Managers’ Budget Briefing:

2024: 20/24/28
2025: 24 (28)
 

Note the eventual running cryo-weeks for each run will depend on the final budget guidance for that year.

2022 PAC Committee Members (to be updated 2023):  

Roberta Arnaldi, John Harris (chair), Huan Huang, Leticia Cunqueiro Mendez, Jen-Chieh Peng,
Scott Pratt, Krishna Rajagopal, Claudia Ratti, Mikhail Stephanov, Julia Velkovska

Maria Chamizo‐Llatas, Bill Christie, Dmitri Denisov, Jamie Dunlop, Wolfram Fischer,
Achim Franz (secretary), Haiyan Gao, Hong Ma, Michiko Minty

STAR BUR Committee and Assignments:

Elke Aschenauer, Daniel Brandenburg, Daniel Cebra, Oleg Eyser, Carl Gagliardi, Jiangyong Jia, Rongrong Ma, Niseem Magdy, Alex Jentsch, Fuqiang Wang, Hanna Zbroszczyk
 
Ex-Officio -- Lijuan Ruan, Frank Geurts, ShinIchi Esumi, Sooraj Radhakrishnan, Barbara Trzeciak
Chair -- Qinghua Xu


STAR run25 BUR

  

BUR Documents  for Run25

Background Materials

Timeline

  • committee review: October 28, 2024 
  • draft document to Collaboration: October  30, 2024
  • final version to ALD: November 3, 2024
  • NPP 2024 PAC Meeting: November 7-8, 2024


BNL Nuclear Physics PAC 2024 Charge and Agenda

The run scenarios for Run 25 (FY25) to consider:
• 20-week operation (instead of the 22-week guidance, because 2 weeks were already taken by Run 24)
• 28-week operation, assuming we can extend the run (I will talk to CAD & EIC about a possible further operation and its consequences).

Timetable for the BUR 2025/26
• End of Run 24 – October 21, 2024
• CAD provides guidance for performance by October 28, 2024
• Experiments finalize and submit the BUR to me/PAC by November 3, 2024.
• PAC meets on November 7, 2024

2024 PAC Committee Members:  

STAR BUR Committee:

Frank Geurts, Jiangyong Jia, Lijuan Ruan, Qinghua Xu
internal reviewers: Helen Caines, Zhangbu Xu
 

STAR Technical Support Group

On this site you'll be able to:

 

 

Electronics Lab private network

 

Plan Overview:
 

In Room 1-232 (the "Electronics Lab"), our goal is to remove experimental test equipment and unsupported (or poorly supported) computer systems from the "public" network.   These are likely sources of Cyber Security concerns and may benefit from a less volatile network environment than the campus network.  To do this, we will create a private network in the Electronics Lab, with one maintained Linux node that will be dual-homed on the public network to act as a gateway to access the private network as needed.   With the introduction of one (or more) Linux box(es), eventually we hope to retire the old Sun workstation completely.

The address space we will use is 192.168.140.0/255.255.255.0 (256 IP addresses).  This is "registered" with ITD network operations as a STAR private network, so that if anything "escapes" from the private space into the campus network, they will know who to call.

Given the short list of anticipated devices (below), no name server is planned, nor other common network services such as DHCP (subject to change as needed). 

Devices Using This Network:

  1. The Linux gateway system -- presley.star.bnl.gov (Scientific Linux 4.5) using 192.168.140.1
  2. One Sun Ultra E450 -- "svtbmonitor" (Solaris 8) using 192.168.140.2 
  3. One serial console server using 192.168.140.3 
  4. Several (~4-5) rack-mounted MVME or similar devices at any time.  An initial set have been assigned 192.168.140.11-15.  A set of working sample boot parameters are included in a file attached to this page (see the links in the Attachments section below).
  5. Update August 4, 2009:  Six more processors are being added to the network.  The names trgfe6 through trgfe11 have been assigned 192.168.140.16-21 (i.e., added to /etc/hosts on presley and svtbmonitor).
  6. Several Windows PCs, including laptops that may come and go (which will require manual configuration -- a small inconvenience that can perhaps be overcome with networking "profile" software that stores multiple configurations on the node -- to be investigated, e.g. http://www.eusing.com/ipswitch/free_ip_switcher.htm).
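The address bookkeeping above can be sketched in a few lines of Python. This is only an illustration of the assignments listed in this section (gateway .1, svtbmonitor .2, console server .3, trgfe6-trgfe11 at .16-.21); the hostname labels here are descriptive, not copied from presley's actual /etc/hosts.

```python
import ipaddress

# The registered STAR private network: 192.168.140.0/24 (256 addresses).
net = ipaddress.ip_network("192.168.140.0/24")

hosts = {
    "presley-private": "192.168.140.1",  # gateway's second NIC
    "svtbmonitor":     "192.168.140.2",  # Sun Ultra E450
    "consoleserver":   "192.168.140.3",  # serial console server
}
# Aug 4, 2009 update: trgfe6..trgfe11 were assigned .16 through .21
for i, n in enumerate(range(6, 12)):
    hosts[f"trgfe{n}"] = f"192.168.140.{16 + i}"

# Sanity-check every assignment against the private subnet,
# then print /etc/hosts-style lines.
for name, addr in sorted(hosts.items(), key=lambda kv: kv[1]):
    assert ipaddress.ip_address(addr) in net
    print(f"{addr}\t{name}")
```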

Status:

A Linux box named presley.star.bnl.gov is configured on the public network, with a second NIC using 192.168.140.1 to act as a manual gateway node as needed.  It is in the south-west corner of the lab.  An account named daqlab has been created (contact Wayne or Danny for the password if appropriate).

 

For the effort to replace svtbmonitor, the home directory of svtbmonitor's testlab account has been copied over to presley in /home/svtbmonitor/testlab. 

Danny and Phil identified a handful of files from svtbmonitor as important so far:

  • emc.tcl
  • smd_qa.tcl
  • tower_qaodt.tcl
  • grab (compiled C code to open a window and connect to a specified node on the serial console server)
  • grab.bag (used by grab to "resolve" common names into ports on the console server)

The original versions of these files are all in their original (relative) paths; modified versions for presley and the current networking setup were created and put in /home/daqlab/.  The Tcl scripts were adjusted for the slightly different environment on presley and the basics were demonstrated to work (full functionality still needs testing to confirm).  The "grab" executable had to be recompiled from source -- simple enough, since the source consists of "grab.c" and required only a single minor change for the new environment on presley.  So far so good there.
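To make the grab/grab.bag relationship concrete, here is a hypothetical sketch of the lookup grab performs: resolve a friendly device name to a port on the serial console server at 192.168.140.3. The "name port" file format and the port numbers are invented for illustration; the real grab.bag format may differ.

```python
CONSOLE_SERVER = "192.168.140.3"  # the serial console server on the private net

def load_bag(text):
    """Parse a hypothetical grab.bag: 'name port' pairs, one per line,
    ignoring blank lines and '#' comments."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            name, port = line.split()
            table[name] = int(port)
    return table

def resolve(name, table):
    """Mimic grab's name resolution: map a common name to (host, port)."""
    return (CONSOLE_SERVER, table[name])

# Example entries (invented, for illustration only):
bag = load_bag("trgfe3 7003\nsvtcrate 7004  # spare line\n")
```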

 

Two small "desktop" 10/100 Mbps switches have been connected to each other to serve as the "backbone":

  • one is on a shelf on the west wall
  • the other is on a shelf on the north wall

 

This is easy to expand with one or two 16-port switches if needed (well, it actually is needed as I write this...).  (In fact, these switches have been rather fluid in the first months, coming and going and being swapped for others, all at a slight inconvenience to those of us trying to work with them...)

[Feb. 19, 2009 update -- the physical layout of switches and cables has been switched around many times since this description and I don't know the current state.]

The old networking (not to mention the serial lines) is a mess of cables, old hubs and switches that I plan to ignore as long as possible, though most of it should be removed if it isn't nailed down.  [Feb. 19, 2009 - some clean up appears to have been done by the folks working in the lab.]

 

To Be Done, moving from svtbmonitor to presley (last updated, Feb 19, 2009):

Towards this end we've identified the following remaining tasks:

Configure backups of the home directories on presley as another safety net for the critical content from svtbmonitor that has been transferred over.

Jack Engelage and Hank Crawford have transferred a bunch of files from svtbmonitor to presley [are they done?].  A "trigger" group was created containing Jack, Chris and Hank.  They have /home/trigger to share amongst themselves.

The biggies: 

VxWorks compilers for Linux and booting the VME processors from presley.  [Feb 19, 2009 update:  Yury Gorbunov (and Jack Engelage?) have made successful boots of trgfe3 from presley via FTP, so it looks like svtbmonitor has a chance of being retired at some point.  Sample boot parameters are attached]

 

Ralph test area

This is my first test page.

Shutdown schedule

FY04

Project Start Date: Fri 1/2/04
Project Finish Date: Mon 10/25/04

STAR FY04 v3 Shutdown


ID
Name
DurationStart_DateFinish_DateResource_NamesPredecessorsSuccessors
1
STAR Detector Operations
211.35 daysMon 1/5/04Mon 10/25/04   
2
Physics Run FY04
96 daysMon 1/5/04Sat 5/15/04  5FF
3
Start Shutdown Activities
116.35 daysSat 5/15/04Mon 10/25/04   
4
Move Detector into AB
14.6 daysSat 5/15/04Thu 6/3/04   
5
Purge TPC Gas
1 daySat 5/15/04Sat 5/15/04 2FF6SS
6
Subsystem Testing
6 daysSat 5/15/04Fri 5/21/04 5SS7
7
Retract PMD Detector
0.1 daysMon 5/24/04Mon 5/24/04Mech.Tech.68
8
Remove East BBC Detector
0.25 daysMon 5/24/04Mon 5/24/04Mech.Tech.79
9
Retract East Pole Tip
0.5 daysMon 5/24/04Mon 5/24/04Mech.Tech.[2]810FF
10
Install Scaffold
0.25 daysMon 5/24/04Mon 5/24/04Carpenter[2]9FF12,11
11
Remove Shield Wall
2.5 daysMon 5/24/04Thu 5/27/04Riggers[3]1026,23,16SS,18,135
12
Remove West Pole Tip Utilities
0.25 daysMon 5/24/04Tue 5/25/04Mech.Tech.1013
13
Remove West BBC Detector
0.25 daysTue 5/25/04Tue 5/25/04Mech.Tech.1214
14
Retract West Pole Tip
0.5 daysTue 5/25/04Tue 5/25/04Mech.Tech.[2]1315FF
15
Install Scaffold
0.13 daysTue 5/25/04Tue 5/25/04Carpenter[2]14FF 
16
Disconnect Electrical Power
0.5 daysMon 5/24/04Tue 5/25/04Electrician[2]11SS18,17
17
Disconnect Magnet Power
0.5 daysTue 5/25/04Tue 5/25/04Elect.Tech.[2]1618
18
Remove Buss Bridge
0.25 daysThu 5/27/04Thu 5/27/04Mech.Tech.[2]16,17,1119,21
19
Disconnect Platform MCWS
0.5 daysThu 5/27/04Fri 5/28/04Mech.Tech.1820
20
Disconnect Subsystems Utilities
1 dayFri 5/28/04Mon 5/31/04Mech.Tech.1926
21
Remove Platform Bridge
0.25 daysThu 5/27/04Thu 5/27/04Mech.Tech.[2]1826,22
22
Remove South Platform Stairs
0.25 daysThu 5/27/04Fri 5/28/04Mech.Tech.[2]2126,24
23
Disconnect RHIC Vacuum Pipe
0.5 daysThu 5/27/04Thu 5/27/04Mech.Tech.11 
24
Remove Seismic Anchors
0.25 daysFri 5/28/04Fri 5/28/04Mech.Tech.[2]2225
25
Disconnect Magnet LCW
0.25 daysFri 5/28/04Fri 5/28/04Mech.Tech.[2]2426
26
Roll Detector into AB
1 dayMon 5/31/04Tue 6/1/04Mech.Tech.[2]11,21,22,25,2027
27
Blow-out Magnet Coils
0.5 daysTue 6/1/04Tue 6/1/04Mech.Tech.[2]2628
28
Connect South Platform Stairs
0.5 daysTue 6/1/04Wed 6/2/04Mech.Tech.[2]2729
29
Connect Platform Bridge
0.5 daysWed 6/2/04Wed 6/2/04Mech.Tech.[2]2831,30
30
Connect Seismic Anchors
0.5 daysWed 6/2/04Thu 6/3/04Mech.Tech.29 
31
Install Buss Bridge
0.5 daysWed 6/2/04Thu 6/3/04Mech.Tech.[2]2932
32
Connect Electrical Power
0.5 daysThu 6/3/04Thu 6/3/04Electrician[2]3133SS
33
Connect Subsystems Utilities
0.5 daysThu 6/3/04Thu 6/3/04Mech.Tech.32SS34SS
34
Connect Platform MCWS
0.5 daysThu 6/3/04Thu 6/3/04Mech.Tech.33SS35
35
Detector In AB
0 daysThu 6/3/04Thu 6/3/04 3437,117,121
36
Remove East & West FTPC
13.75 daysThu 6/3/04Wed 6/23/04   
37
Remove E/W IFC Air Manifolds
0.25 daysThu 6/3/04Thu 6/3/04Mech.Tech.3538
38
Remove East & West Scaffold
0.5 daysThu 6/3/04Fri 6/4/04Carpenter[2]3739
39
Install West Platform & Rails
1 dayFri 6/4/04Mon 6/7/04Carpenter[2]3840,41
40
Remove West FTPC
0.5 daysMon 6/7/04Mon 6/7/04Mech.Tech.3945
41
Rig Installation Frame into AB
0.5 daysMon 6/7/04Mon 6/7/04Riggers[3]3942SS
42
Install East Installation Frame
0.5 daysMon 6/7/04Mon 6/7/04Mech.Tech.[2]41SS43
43
Install East Rails
1 dayMon 6/7/04Tue 6/8/04Mech.Tech.[2]4244
44
Remove East FTPC
0.5 daysTue 6/8/04Wed 6/9/04Mech.Tech.[2]4345
45
East & West FTPC's Removed
0 daysWed 6/9/04Wed 6/9/04 40,4446,48
46
Maintenance on FTPC's
10 daysWed 6/9/04Wed 6/23/04Electronic Tech45 
47
Remove Cone
11.5 daysWed 6/9/04Thu 6/24/04   
48
Disconnect Utilities
4 daysWed 6/9/04Tue 6/15/04Mech.Tech.4549
49
Remove Cone from IFC
0.5 daysTue 6/15/04Tue 6/15/04Mech.Tech.4876,126
50
Remove East Installation Frame
1 dayTue 6/22/04Wed 6/23/04Mech.Tech.[2]76,12651FF
51
Remove Installation Frame from AB
0.5 daysWed 6/23/04Wed 6/23/04Riggers[3]50FF52
52
Remove Installation Frame West
1 dayWed 6/23/04Thu 6/24/04Carpenter[2]5153
53
Cone Removed
0 daysThu 6/24/04Thu 6/24/04 5257
54
Install East BEMC Modules
47.5 daysMon 6/14/04Wed 8/18/04   
55
(30) Modules Delivered
4 daysMon 6/14/04Fri 6/18/04Riggers[3]57FF-5 days56FF
56
(30) Modules Staged in AB
4 daysMon 6/14/04Fri 6/18/04Mech.Tech.[2]55FF 
57
Install East & West Scaffold
0.5 daysThu 6/24/04Fri 6/25/04Carpenter[2]5358,55FF-5 days
58
Remove East Side Cables & Tray
3 daysFri 6/25/04Wed 6/30/04Mech.Tech.[2]5759
59
Erect Installation Fixture S.E. Upper
0.5 daysWed 6/30/04Wed 6/30/04Mech.Tech.[2]5860
60
Remove S.E. TPC Support Arm
0.25 daysWed 6/30/04Thu 7/1/04Mech.Tech.[2]5961
61
Install Modules 106-103
4 daysThu 7/1/04Wed 7/7/04Mech.Tech.[2]6062
62
Erect Installation Fixture S.E. Lower
0.5 daysWed 7/7/04Wed 7/7/04Mech.Tech.[2]6163
63
Install Modules 102-89
14 daysWed 7/7/04Tue 7/27/04Mech.Tech.[2]6264
64
Install S.E. TPC Support Arm
0.5 daysTue 7/27/04Wed 7/28/04Mech.Tech.[2]6365
65
Erect Installation Fixture N.E. Lower
0.5 daysWed 7/28/04Wed 7/28/04Mech.Tech.[2]6466
66
Install Modules 77-88
12 daysWed 7/28/04Fri 8/13/04Mech.Tech.[2]6567
67
Remove Installation Fixture
0.25 daysFri 8/13/04Fri 8/13/04Mech.Tech.[2]6668
68
Install East Side Cables & Tray
3 daysFri 8/13/04Wed 8/18/04Mech.Tech.[2]6769
69
East BEMC Installation Complete
0 daysWed 8/18/04Wed 8/18/04 6882,72
70
Install BEMC PMT Boxes
35 daysWed 7/14/04Wed 9/1/04   
71
Install 208 VAC Power
15 daysWed 7/14/04Wed 8/4/04Electrician[2]72SF-10 days 
72
Install PMT Boxes
10 daysWed 8/18/04Wed 9/1/04Mech.Tech.[2]6973FF,71SF-10 days
73
Install Electronics
20 daysWed 8/4/04Wed 9/1/04Electronic Tech72FF74
74
PMT Box Installation Complete
0 daysWed 9/1/04Wed 9/1/04 73144
75
Install SSD
20.5 daysTue 6/15/04Wed 7/14/04   
76
Rig Cone to Cleanroom Roof
0.5 daysTue 6/15/04Wed 6/16/04Mech.Tech.4977,50
77
Install SSD on Cone
20 daysWed 6/16/04Wed 7/14/04Mech.Tech.7678SS,80
78
Install LV Crate & Cables
5 daysWed 6/16/04Wed 6/23/04Electronic Tech77SS79
79
Install RDO Boxes
5 daysWed 6/23/04Wed 6/30/04Electronic Tech78 
80
SSD Installed on Cone
0 daysWed 7/14/04Wed 7/14/04 7785
81
Install Cone
20.5 daysWed 8/18/04Thu 9/16/04   
82
Rig Installation Frame into AB
0.5 daysWed 8/18/04Thu 8/19/04Riggers[3]6983SS
83
Install East Installation Frame
0.5 daysWed 8/18/04Thu 8/19/04Mech.Tech.[2]82SS84
84
Install Tables & Rails
0.5 daysThu 8/19/04Thu 8/19/04Mech.Tech.8385
85
Rig Cone to Rails
0.5 daysThu 8/19/04Fri 8/20/04Mech.Tech.[2]84,8086
86
Survey SSD to SVT
2 daysFri 8/20/04Tue 8/24/04Surveyors[2]8587
87
Complete SSD Installation
2 daysTue 8/24/04Thu 8/26/04Mech.Tech.8688
88
Test Utility Connections
1 dayThu 8/26/04Fri 8/27/04Electronic Tech8789FF
89
Clean TPC IFC
1 dayThu 8/26/04Fri 8/27/04Mech.Tech.88FF90
90
Install Cone in Detector
0.5 daysFri 8/27/04Fri 8/27/04Mech.Tech.8991
91
Test IFC High Voltage
0.5 daysFri 8/27/04Mon 8/30/04Electronic Tech9092
92
Survey TPC to Magnet
2 daysMon 8/30/04Wed 9/1/04Surveyors[2]9193
93
Survey SVT to TPC
2 daysWed 9/1/04Fri 9/3/04Surveyors[2]9294
94
Install East Partitions
0.5 daysFri 9/3/04Fri 9/3/04Mech.Tech.93110
95
Connect Water Lines
0.5 daysMon 9/6/04Tue 9/7/04Mech.Tech.11096
96
Connect RDO Cables
1 dayTue 9/7/04Wed 9/8/04Electronic Tech9597
97
Remove East Rails from IFC
0.5 daysWed 9/8/04Wed 9/8/04Mech.Tech.[2]9698
98
Remove East Installation Frame
0.5 daysWed 9/8/04Thu 9/9/04Mech.Tech.[2]9799FF
99
Remove Installation Frame from AB
0.25 daysThu 9/9/04Thu 9/9/04Riggers[3]98FF100
100
East Side Complete
0 daysThu 9/9/04Thu 9/9/04 99107,101
101
Install West Partitions
0.5 daysThu 9/9/04Thu 9/9/04Mech.Tech.100112
102
Connect Water Lines
0.5 daysMon 9/13/04Mon 9/13/04Mech.Tech.114103
103
Connect RDO Cables
1 dayMon 9/13/04Tue 9/14/04Electronic Tech102104
104
Remove West Rails from IFC
0.5 daysTue 9/14/04Wed 9/15/04Mech.Tech.103105
105
Remove Installation Frame West
1 dayWed 9/15/04Thu 9/16/04Carpenter[2]104106
106
West Side Complete
0 daysThu 9/16/04Thu 9/16/04 105107
107
Cone Installation Complete
0 daysThu 9/16/04Thu 9/16/04 100,106144
108
Install FTPC East & West in AB
12.5 daysWed 8/25/04Mon 9/13/04   
109
Run & Test N. Platform
5 daysWed 8/25/04Wed 9/1/04Electronic Tech110SF-2 days 
110
Install East FTPC in Detector
1 dayFri 9/3/04Mon 9/6/04Mech.Tech.[2]94115,109SF-2 days,95
111
Run & Test N. Platform
5 daysWed 9/1/04Wed 9/8/04Electronic Tech114SF-2 days 
112
Remove West Scaffold
0.25 daysThu 9/9/04Fri 9/10/04Carpenter[2]101113
113
Install West Platform & Rails
0.25 daysFri 9/10/04Fri 9/10/04Carpenter[2]112114
114
Install West FTPC in Detector
1 dayFri 9/10/04Mon 9/13/04Mech.Tech.[2]113115,111SF-2 days,102
115
FTPC Installed in AB
0 daysMon 9/13/04Mon 9/13/04 114,110 
116
PMD Maintenance
40 daysThu 6/3/04Thu 7/29/04   
117
Subsystem Maintenance
30 daysThu 6/3/04Thu 7/15/04 35118
118
Detector Commisioning
10 daysThu 7/15/04Thu 7/29/04 117119
119
PMD Maintenance Complete
0 daysThu 7/29/04Thu 7/29/04 118 
120
EEMC Maintenance
90 daysThu 6/3/04Thu 10/7/04   
121
Maintenance
50 daysThu 6/3/04Thu 8/12/04 35122SS
122
MAPMT Installation
80 daysThu 6/3/04Thu 9/23/04Mech.Tech.[2]121SS123
123
Subsystem Testing
10 daysThu 9/23/04Thu 10/7/04 122124
124
EEMC Maintenance Complete
0 daysThu 10/7/04Thu 10/7/04 123 
125
TPC Maintenance
50 daysTue 6/15/04Tue 8/24/04   
126
Repair IFC Strip Short
5 daysTue 6/15/04Tue 6/22/04Mech.Tech.49127,50,131SS
127
Modify Resistor Chain
5 daysTue 6/22/04Tue 6/29/04Mech.Tech.126128
128
FEE & RDO Maintenance
40 daysTue 6/29/04Tue 8/24/04Electronic Tech127129
129
TPC Maintenance Complete
0 daysTue 8/24/04Tue 8/24/04 128 
130
TOF Maintenance
15 daysTue 6/15/04Tue 7/6/04   
131
Tray Maintenance
10 daysTue 6/15/04Tue 6/29/04 126SS132
132
Detector Testing
5 daysTue 6/29/04Tue 7/6/04 131133
133
TOFp Maintenance Complete
0 daysTue 7/6/04Tue 7/6/04 132 
134
FPD Installation
20 daysThu 5/27/04Thu 6/24/04   
135
Install West Detectors
10 daysThu 5/27/04Thu 6/10/04Mech.Tech.11136
136
Install N. E. Stand for 20-cm Travel
5 daysThu 6/10/04Thu 6/17/04Mech.Tech.[2]135137
137
Detector Commissioning
5 daysThu 6/17/04Thu 6/24/04 136138
138
FPD Installation Complete
0 daysThu 6/24/04Thu 6/24/04 137140
139
BBC Maintenance
15 daysThu 6/24/04Thu 7/15/04   
140
Subsystem Maintenance
10 daysThu 6/24/04Thu 7/8/04 138141
141
Detector Commissioning
5 daysThu 7/8/04Thu 7/15/04 140142
142
BBC Maintenance Complete
0 daysThu 7/15/04Thu 7/15/04 141 
143
Move Detector into WAH
27 daysThu 9/16/04Mon 10/25/04   
144
Disconnect Electrical Power
0.5 daysThu 9/16/04Thu 9/16/04Electrician[2]107,74147,145SS,146SS
145
Disconnect Platform MCWS
0.5 daysThu 9/16/04Thu 9/16/04Mech.Tech.144SS 
146
Disconnect TPC Utilities
0.5 daysThu 9/16/04Thu 9/16/04Mech.Tech.144SS 
147
Remove Buss Bridge
0.5 daysThu 9/16/04Fri 9/17/04Mech.Tech.[2]144148
148
Remove Platform Bridge
0.5 daysFri 9/17/04Fri 9/17/04Mech.Tech.[2]147149
149
Remove South Platform Stairs
1 dayFri 9/17/04Mon 9/20/04Mech.Tech.[2]148150
150
Disconnect Seismic Anchors
0.5 daysMon 9/20/04Tue 9/21/04Mech.Tech.[2]149151SS
151
Lower Detector to Rails
0.5 daysMon 9/20/04Tue 9/21/04Mech.Tech.[2]150SS152
152
Roll Detector into WAH
1.5 daysTue 9/21/04Wed 9/22/04Mech.Tech.[2]151153,154
153
Connect Magnet LCW
1 dayWed 9/22/04Thu 9/23/04Mech.Tech.[2]152 
154
Raise/Level Detector @ IR
0.5 daysWed 9/22/04Thu 9/23/04Mech.Tech.[2]152155FF,163
155
Connect Seismic Anchors
0.5 daysWed 9/22/04Thu 9/23/04Mech.Tech.[2]154FF159,156
156
Install Buss Bridge
0.5 daysThu 9/23/04Thu 9/23/04Mech.Tech.[2]155157,160
157
Connect Electrical Power
1 dayThu 9/23/04Fri 9/24/04Electrician[2]156158
158
Connect Magnet Power
1 dayFri 9/24/04Mon 9/27/04Elect.Tech.[2]157 
159
Connect All Subsystem Utilities
2 daysThu 9/23/04Mon 9/27/04Mech.Tech.[2]155168,166
160
Connect Platform MCWS
0.5 daysThu 9/23/04Fri 9/24/04Mech.Tech.156168,161
161
Connect South Platform Stairs
1 dayFri 9/24/04Mon 9/27/04Mech.Tech.[2]160162
162
Connect Platform Bridge
0.5 daysMon 9/27/04Mon 9/27/04Mech.Tech.[2]161167
163
Connect Vacuum Pipe & Supports
0.5 daysThu 9/23/04Thu 9/23/04Mech.Tech.[2]154164
164
Vacuum Pipe Bake-out
5 daysThu 9/23/04Thu 9/30/04Mech.Tech.[2]163165
165
Survey Beam Pipe & Magnet
4 daysThu 9/30/04Wed 10/6/04Surveyors[2]164 
166
Detector Safety Certification
2 daysMon 9/27/04Wed 9/29/04 159 
167
Install Shield Wall
3 daysMon 9/27/04Thu 9/30/04Riggers[3]162171
168
Subsystems testing in WAH
20 daysMon 9/27/04Mon 10/25/04 160,159169SS+5 days
169
Install Pole Tips
2 daysMon 10/4/04Wed 10/6/04Mech.Tech.[2]168SS+5 days170
170
Install BBC Detectors
1 dayWed 10/6/04Thu 10/7/04Mech.Tech.[2]169171
171
Shutdown Activities Complete
0 daysThu 10/7/04Thu 10/7/04 167,170 

FY06

STAR FY06 Shutdown Schedule


Project Start Date: 6/26/06
Project Finish Date: 10/27/06

 

STAR FY06 v5 Shutdown

IDWBS
Name
DurationStart_DateFinish_DateResource_NamesPredecessorsSuccessors
11
STAR Detector FY06 Shutdown
87 days6/26/0610/27/06   
21.1
Disconnect Detector Utilities
10.75 days6/26/067/12/06   
31.1.1
Begin Shutdown Activities
0 days6/26/066/26/06  4SS,114SS
41.1.2
Purge TPC Gas
1 day6/26/066/26/06 3SS 
51.1.3
Remove West BBC Detector
0.25 days7/5/067/5/06Mech.Tech.1146,103
61.1.4
Disconnect West Pole Tip Utilities
0.5 days7/5/067/5/06Mech.Tech.[2]57
71.1.5
Retract West Pole Tip
0.5 days7/5/067/6/06Mech.Tech.[2]68FF
81.1.6
Install Scaffold
0.25 days7/6/067/6/06Carpenter[2]7FF9
91.1.7
Remove East BBC Detector
0.25 days7/6/067/6/06Mech.Tech.810,103
101.1.8
Disconnect East Pole Tip Utilities
0.25 days7/6/067/6/06Elect.Tech.[2]911
111.1.9
Retract East Pole Tip
0.5 days7/6/067/7/06Mech.Tech.[2]1012FF,15,13
121.1.10
Install Scaffold
0.25 days7/7/067/7/06Carpenter[2]11FF19
131.1.11
Disconnect Electrical Power
0.5 days7/7/067/7/06Electrician[2]1115,14,17
141.1.12
Disconnect Magnet Power
0.5 days7/7/067/10/06Elect.Tech.[2]1315
151.1.13
Remove Buss Bridge
0.25 days7/10/067/10/06Mech.Tech.[2]13,14,1116
161.1.14
Disconnect Platform MCWS
0.5 days7/10/067/10/06Mech.Tech.[2]1518
171.1.15
Disconnect Subsystems Utilities
2 days7/7/067/11/06Mech.Tech.13 
181.1.16
Remove South Platform Stairs
0.25 days7/11/067/11/06Mech.Tech.[2]1621
191.1.17
Remove pVPD Detectors
0.25 days7/7/067/7/06Mech.Tech.[2]1220
201.1.18
Disconnect RHIC Vacuum Pipe
3 days7/7/067/12/06Mech.Tech.[2]1924
211.1.19
Remove Seismic Anchors
0.25 days7/11/067/11/06Mech.Tech.[2]1822
221.1.20
Disconnect Magnet Water
0.25 days7/11/067/11/06Mech.Tech.[2]2123
231.1.21
Blow-out Magnet Coils
1 day7/11/067/12/06Mech.Tech.[2]2224
241.1.22
Detector Utilities Disconnected
0 days7/12/067/12/06 23,2026,57
251.2
Hydraulic System Testing
6.5 days7/12/067/21/06   
261.2.1
Run & Test Hydraulics
5 days7/12/067/19/06Mech.Tech.2427FF
271.2.2
Position Detector on Beamline
0.5 days7/19/067/19/06Mech.Tech.26FF28SS
281.2.3
Survey Detector on Beamline
2 days7/19/067/21/06Surveyors[2]27SS29
291.2.4
Hydraulics Tests Complete
0 days7/21/067/21/06 2831,67,132,85
301.3
Remove Hilman Rollers
5 days7/31/068/4/06   
311.3.1
Remove Roller Assemblies
5 days7/31/068/4/06Mech.Tech.[2]29,12332FF
321.3.2
Remove Roller Assemblies
2 days8/3/068/4/06Riggers[3]31FF33FF
331.3.3
Crate & Ship Roller Assemblies
1 day8/4/068/4/06Riggers[2],Carpenter[2]32FF34FF
341.3.4
Roller Assembly Removal Complete
0 days8/4/068/4/06 33FF 
351.4
Remove West FPD++ Detector
23.5 days7/5/068/7/06   
361.4.1
Disconnect Cabling
5 days7/5/067/11/06Mech.Tech.11437SS
371.4.2
Remove Existing Cells
13 days7/5/067/21/06Mech.Tech.36SS38
381.4.3
Remove Cabling
5 days7/31/068/4/06Mech.Tech.37,12339
391.4.4
Remove Enclosures & Shield Block
0.5 days8/7/068/7/06Riggers[3]38125,40
401.4.5
FPD++ Removal Complete
0 days8/7/068/7/06 3995
411.5
East FTPC Maintenance
15 days8/7/068/25/06   
421.5.1
Remove East IFC Air Manifolds
0.5 days8/7/068/7/06Mech.Tech.67,12343
431.5.2
Remove East Scaffold
0.5 days8/7/068/7/06Carpenter[2]4244
441.5.3
Rig East FTPC Platform
0.5 days8/8/068/8/06Riggers[3]4345SS
451.5.4
Install East FTPC Platform
0.5 days8/8/068/8/06Carpenter[2]44SS46
461.5.5
Install East FTPC Rails
0.5 days8/8/068/8/06Mech.Tech.[2]4547
471.5.6
Remove East FTPC
1 day8/9/068/9/06Mech.Tech.[2]4648,74
481.5.7
Replace East FTPC FEE
5 days8/10/068/16/06Electronic Tech4749
491.5.8
Install FTPC on Rails & Test
5 days8/17/068/23/06 4850FF
501.5.9
Install East FTPC
0.5 days8/24/068/24/06Mech.Tech.[2]49FF,7551
511.5.10
Remove East FTPC Rails
0.5 days8/24/068/24/06Mech.Tech.[2]5052
521.5.11
Rig East FTPC Platform
0.5 days8/25/068/25/06Riggers[3]5153SS
531.5.12
Remove East FTPC Platform
0.5 days8/25/068/25/06Carpenter[2]52SS54
541.5.13
Install East Scaffold
0.5 days8/25/068/25/06Carpenter[2]5355
551.5.14
FTPC Maintenance Complete
0 days8/25/068/25/06 5486,70
561.6
SSD Maintenance
27.25 days7/12/068/18/06   
571.6.1
Compressor Maintenance
5 days7/12/067/19/06 24 
581.6.2
RDO maintenance
5 days7/31/068/4/06Carpenter[2]67SS59
591.6.3
Crate Maintenance
10 days8/7/068/18/06Riggers[3]58,12360
601.6.4
SSD Maintenance Complete
0 days8/18/068/18/06 59 
611.7
BEMC Maintenance
40 days7/31/069/25/06   
621.7.1
Remove & Repair Crates
20 days7/31/068/25/06Electronic Tech12363
631.7.2
Install PMT Box Remote Power
20 days8/28/069/25/06Electronic Tech[2]6264FF
641.7.3
PMT Box Maintenance
20 days8/28/069/25/06Elect.Tech.63FF65
651.7.4
BEMC Maintenance Complete
0 days9/25/069/25/06 64145
661.8
SVT RDO Box Relocation
39 days7/31/069/22/06   
671.8.1
Remove RDO Boxes
5 days7/31/068/4/06Electronic Tech29,12368,42,58SS
681.8.2
Repair RDO Boxes
20 days8/7/069/1/06Electronic Tech6769
691.8.3
Install West RDO Boxes
2 days9/12/069/13/06Electronic Tech68,8772,81,70
701.8.4
Install Remote East RDO Boxes
5 days9/14/069/20/06Mech.Tech.69,5571
711.8.5
Route East RDO Cables
2 days9/21/069/22/06Electronic Tech[2]7072
721.8.6
SVT RDO Box Relocation Complete
0 days9/22/069/22/06 71,69139,145
731.9
HFT Prototype Installation
10 days8/10/068/23/06   
741.9.1
Install East HFT Prototype
10 days8/10/068/23/06Mech.Tech.[2]4775
751.9.2
HFT Prototype Installed
0 days8/23/068/23/06 7450
761.10
PMD Maintenance
40 days7/31/069/25/06   
771.10.1
Replace Supermodules
30 days7/31/069/11/06 12378
781.10.2
Detector Maintenance
10 days9/12/069/25/06 7779
791.10.3
PMD Maintenance Complete
0 days9/25/069/25/06 78145
801.11
EEMC Maintenance
15 days9/14/0610/4/06   
811.11.1
PMT/MAPMT Box Maintenance
12 days9/14/069/29/06Mech.Tech.[2]69,11282SS
821.11.2
Subsystem Testing
15 days9/14/0610/4/06 81SS83
831.11.3
EEMC Maintenance Complete
0 days10/4/0610/4/06 82145,115
841.12
TPC Maintenance
35.75 days7/21/069/11/06   
851.12.1
Remove East/West Bad FEE/RDO
5 days7/21/067/28/06Electronic Tech[2]2988
861.12.2
Install East FEE/RDO
5 days8/28/069/1/06Electronic Tech[2]5589,87
871.12.3
Install West FEE/RDO
5 days9/5/069/11/06Electronic Tech[2]8689,69
881.12.4
Maintenance on Gas Room UPS
1 day7/28/067/31/06 8589
891.12.5
TPC Maintenance Complete
0 days9/11/069/11/06 88,87,86139,145,107
901.13
TOF Maintenance
5 days10/5/0610/11/06   
911.13.1
Remove Old Tray
2 days10/5/0610/6/06Mech.Tech. 92
921.13.2
Install New Trays
3 days10/9/0610/11/06Mech.Tech.9193
931.13.3
TOF Installation Complete
0 days10/11/0610/11/06 92115
941.14
FMS Installation
43 days8/7/0610/6/06   
951.14.1
Install FMS Carriages
5 days8/7/068/14/06Mech.Tech.[2]4096
961.14.2
Install FMS Cells
30 days8/14/069/26/06Mech.Tech.[2]9598FF,97SS
971.14.3
Install AC Power
5 days8/14/068/21/06Electrician[2]96SS100
981.14.4
Install FMS Enclosures
5 days9/19/069/26/06Mech.Tech.[2]96FF99
991.14.5
Install FMS Cabling
8 days9/26/0610/6/06Mech.Tech.[2]98101
1001.14.6
Install Electronics Racks
10 days8/21/069/5/06Electronic Tech[2]97101
1011.14.7
FMS Installation Complete
0 days10/6/0610/6/06 99,100145
1021.15
BBC Maintenance
25 days7/6/068/10/06   
1031.15.1
Subsystem Maintenance
20 days7/6/068/3/06 5,9104
1041.15.2
Detector Commissioning
5 days8/3/068/10/06 103105
1051.15.3
BBC Maintenance Complete
0 days8/10/068/10/06 104 
1061.16
Trigger Maintenance
10 days9/12/069/25/06   
1071.16.1
Trigger Maintenance
10 days9/12/069/25/06 89108
1081.16.2
Trigger Maintenance Complete
0 days9/25/069/25/06 107145
1091.17
Pole Tip Hose Replacement
5 days7/31/068/4/06   
1101.17.1
Remove Old Hose
2 days7/31/068/1/06Mech.Tech.[2]123111
1111.17.2
Install New Braided Hose
3 days8/2/068/4/06Mech.Tech.[2]110112
1121.17.3
Hose Replacement Complete
0 days8/4/068/4/06 11181
1131.18
Magnet Power Supply Maintenance & Test
87 days6/26/0610/27/06   
1141.18.1
Power Supply Maintenance & Test
5 days6/26/066/30/06Electronic Tech[2]3SS5,36
1151.18.2
Remove East/West Scaffold
0.5 days10/12/0610/12/06Carpenter[2]83,93116SS
1161.18.3
Install East Pole Tip
1 day10/12/0610/12/06Mech.Tech.[2]115SS117
1171.18.4
Install West Pole Tip
1 day10/13/0610/13/06Mech.Tech.[2]116118,143FF
1181.18.5
Power Supply Maintenance & Test
10 days10/16/0610/27/06Electronic Tech[2]117119
1191.18.6
Power Supply Maintenance Complete
0 days10/27/0610/27/06 118146
1201.19
C-AD Maintenance
49 days7/24/069/29/06   
1211.19.1
SGIS Modifications
10 days9/18/069/29/06Elect.Tech.[2] 130,144
1221.19.2
PS Sound Barrier Wall
10 days7/31/068/11/06 123 
1231.19.3
AC Substation Maintenance
5 days7/24/067/28/06  62,110,124,128,77,42,59,31,67,38,132,122
1241.19.4
Control Room Modification
10 days7/31/068/11/06 123126
1251.19.5
Install West Tunnel Shielding
5 days8/7/068/14/06Riggers[3]39 
1261.19.6
DAQ Room A/C Upgrade
10 days8/14/068/25/06 124127
1271.19.7
Fire Sprinkler Preaction Upgrade
10 days8/28/069/11/06 126130
1281.19.8
MCW Flow Balance
10 days7/31/068/11/06Mech.Tech.[2]123129
1291.19.9
Seal Tower Basin
5 days8/14/068/18/06Mech.Tech.[2]128 
1301.19.10
C-A Maintenance Complete
0 days9/29/069/29/06 127,121 
1311.20
Connect Detector Utilities
64 days7/31/0610/27/06   
1321.20.1
Connect Magnet LCW
1 day7/31/067/31/06Mech.Tech.[2]29,123133
1331.20.2
Install Buss Bridge
0.5 days8/1/068/1/06Mech.Tech.[2]132134,137
1341.20.3
Connect Electrical Power
1 day8/1/068/2/06Electrician[2]133135,136
1351.20.4
Connect Magnet Power
1 day8/2/068/3/06Elect.Tech.[2]134 
1361.20.5
Connect All Subsystem Utilities
1 day8/2/068/3/06Mech.Tech.[2]134 
1371.20.6
Connect Platform MCWS
0.5 days8/1/068/1/06Mech.Tech.133138
1381.20.7
Connect South Platform Stairs
1 day8/2/068/2/06Mech.Tech.[2]137 
1391.20.8
Connect Vacuum Pipe & Supports
3 days9/25/069/27/06Mech.Tech.[2]72,89140SS
1401.20.9
Vacuum Pipe Bake-out
10 days9/25/0610/6/06Mech.Tech.[2]139SS141
1411.20.10
Install START Detectors
0.5 days10/9/0610/9/06Mech.Tech.140142
1421.20.11
Survey Beam Pipe
2 days10/9/0610/11/06Surveyors[2]141143
1431.20.12
Install BBC Detectors
1 day10/13/0610/13/06Mech.Tech.[2]142,117FF 
1441.20.13
Detector Safety Certification
5 days10/2/0610/6/06 121145
1451.20.14
Subsystems testing in WAH
5 days10/9/0610/13/06 144,65,108,101,72,79,83,89146
1461.20.15
Shutdown Activities Complete
0 days10/27/0610/27/06 145,119 

 

 

Talks Committee related documents

 

STAR Talks Committee 

 


Máté Csanád, Jim Drachenberg (chair), Evan Finch, Debasish Mallick, Matt Posik, Maria Stefaniak, Zebo Tang



Private committee mailing List 
Private committee mailing old/archived list 
STAR presentations

Talk Statistics Interface

Talk Statistics Comparison Interface


Upcoming Conferences Link
Nominated Speaker List 19-24

Talks Committee Criteria and Protocols

STAR Talks Committee Overview and Advice to Speakers

(Last updated August 2012)

 

Purpose: The STAR Talks Committee (STC) recommends speakers to the Spokesperson from among collaborators in good standing who have been nominated to speak at a conference by another STAR collaborator.

The STAR talks committee endeavors to follow the procedures outlined in the "STAR Policies for the Publication and Presentation of STAR Results" when recommending speakers to the Spokesperson.

http://drupal.star.bnl.gov/STAR/starnotes/public/csn0451

Criteria that are considered when recommending speakers include:

  1. Match between talk topic and speaker's areas of expertise
  2. Abilities as a speaker
  3. Career status (Jr. or Sr., especially current employment and career transitions)
  4. History of previous talks for STAR
  5. Community service and physics analysis contributions to STAR
  6. Institutional and geographic balance
  7. Special criteria other than the above, if requested by conference organizers

The Talks Committee uses its best judgment in each case when deciding the relative importance of these criteria. After a speaker has been selected, the STAR Talks Committee's recommendation will be posted to the STAR-Talks mailing list as soon as the speaker accepts the invitation.

Nominations may come from any member of the collaboration, including self-nominations and nominations from members of the STAR Talks Committee. The STAR Talks Committee will consider the number of previous STAR talks given by a candidate when making its recommendations. Other selection criteria are listed in Table I. Colloquia/seminars at universities or labs, and contributed 10-minute APS talks, are not considered relevant. Previous talks where the nominee was invited directly by the conference organizers, or which included only publicly accessible STAR data, or had a broader scope than just STAR results, are considered somewhat relevant by the Talks Committee, although less so than talks given as a designated representative of the STAR collaboration.

A STAR collaborator who receives a speaking invitation directly from a conference organizer is expected to inform the Spokesperson and the chair of the Talks Committee if they plan to accept the invitation. If the collaborator does not intend to accept the invitation, it is recommended that they refer the invitation to the Talks Committee, which can determine whether it can be shared with another STAR collaborator. If a conference organizer solicits the advice of a collaborator regarding a STAR speaker, the collaborator is expected to refer the matter to the Talks Committee.

Speakers representing the STAR collaboration have specific obligations that are clearly stated in STAR Note 0451 and on the web page entitled "Policies for the Publication and Presentation of STAR Results" (noted above). Note especially bullets #6 through #11.

The most important of these obligations include discussing the content of the talk with the working groups in a timely fashion (see CSN0451), rehearsing the talk in front of several STAR collaborators including the speaker’s council member, and posting the abstract and talk to the STAR-Talks mailing list according to the deadlines established in CSN0451.

Upon returning from the conference, speakers are required to post their talks on the “List of Presentations” web page. Input to the list can be made via the link on the STAR homepage (http://www.star.bnl.gov/input).

Nominated Speakers

Nominated Speakers 2014-2015

         
Conference Date Abstracts Due Speaker(s) (talk uploaded/not uploaded/not uploaded but direct invite) Web Page
Baldin Seminar - 2014 (Dubna)  September 16-20,2014  08-01-2014 (1)  http://relnp.jinr.ru/ishepp/
Spin - 2014  October 20-24, 2014  05-01-2014 (1) Zilong Chang  http://www.phy.pku.edu.cn/spin2014/
RHIC/AGS Users Meeting - 2014  June 17-20, 2014  05-01-2014 (3) Flemming Videbaek, Zhenyu Ye, Alex Schmah  http://www.bnl.gov/aum2014/
RHIC/AGS BES Workshop
    (1)Evan Sangaline  
RHIC/AGS Bulk Properties Workshop
    (2)Aihong Tang, Hui Wang  
RHIC/AGS Quarkonia Workshop
    (2)Anthony Kesich, Ota Kukral  
RHIC/AGS Nucleon Structure Workshop
    (4)Brian Page, Yuxi Pan,Oleg Eyser,Jinlong Zhang  
HEPFT-2014 (Protvino)  June 23-27, 2014  05-15-2014 (1)Vitaly Okorokov  https://indico.cern.ch/event/269671/
ICHEP-2014  July 2-7, 2014  04-23-2014 (1)Robert Vertesi  http://ichep2014.es/
RIKEN-BNL Thermal Photons+Dileptons  August 20-22, 2014  asap (3)Xin Dong, Joseph Butterworth, Bingchu Huang  http://www.bnl.gov/tpd2014/
High-pT-LHC - 2014  September 9-12, 2014  08-18-2014 (3-4)Daniel Kikola, Sonja Kabana (direct)  https://indico.cern.ch/event/289665/
ICNFP- 2014 (Crete)  July 28-August 6, 2014  ? (5) Barbara Trzeciak, Shi Shusu, Adam Gibson, Daniel McDonald  http://indico.cern.ch/event/277650/
Diffraction - 2014  September 10-16, 2014  08-08-2014 (1,2,1)Mariusz Przybycien, Salvatore Fazio(direct invite), Chanak De Silva, Nikola Poljak  https://agenda.infn.it/conferenceDisplay.py?confId=7520
DNP - 2014 (Hawaii) October 7-11, 2014 asap (1) Jinhui Chen  http://web.mit.edu/lns/hawaii14/
ATHIC-2014  August 5-8, 2014    Kefeng Xin (direct invite)  http://www-nuclth.phys.sci.osaka-u.ac.jp/ATHIC2014/
Collectivity in Relativistic Heavy Ion Collisions September 14-20, 2014   Mike Lisa (direct invite) https://indico.cern.ch/event/284199/
VERTEX-2014 15-19 September 2014 07-15-14 (1) Michal Szelezniak http://www.amca.cz/VERTEX2014/
WPCF-2014  August 25-29, 2014  asap  (1) Mike Lisa, Yadav Pandit (direct invite)  https://indico.cern.ch/event/300974/
Resonance Workshop at Catania November 3-7, 2014   Richard Witt (direct invite), Lokesh Kumar (direct invite) https://agenda.ct.infn.it/conferenceDisplay.py?confId=1040
CPOD-2014 November 17-21, 2014 asap (1)Xiaofeng Luo, Xianglei Zhu (direct invite) http://www2.physik.uni-bielefeld.de/cpod2014.html
IS-2014 December 3-7, 2014 asap (1)Helen Caines http://is2014.lbl.gov 
WWND-2015  January 25-31, 2015  12-01-14 (5+) Manuel Calderon, Yadav Pandit, Kurt Jiang, Flemming Videbaek https://indico.cern.ch/event/336180/
ICPAQGP-2015  February 2-6, 2015  10-24-14  (2)Bedanga Mohanty, Jan Rusnak,(1)Zhangbu Xu(direct invite)  http://www.vecc.gov.in/ICPAQGP2015
Heavy Flavor Production Workshop  January 8-10, 2015    Xin Dong (direct invite)
Zimanyi school December 1-5, 2014   Robert Vertesi (direct invite) http://zimanyischool.kfki.hu/14/
QCD workshop on Chirality  January 21-23, 2015  12-15-14  (3)Hongwei Ke, Gang Wang, Liwen Wen  
CHARM 2015 May 17-23, 2015   multiple, undefined -- no plenary talks http://charm2015.wayne.edu/
Moriond 2015 March 21-28, 2015 3-1-15 (2)Matt Lamont, Bingchu Huang http://moriond.in2p3.fr/QCD/2015/
GHP 2015 April 8-10, 2015   Frank Guerts (direct invite) http://www.aps.org/units/ghp/meetings/meeting.cfm?name=GHP15
DIS-2015 April 27-May 1, 2015 3-1-15 Salvatore Fazio (recommendation) http://www.physics.smu.edu/web/dis2015/
VERTEX - 2015 June 1- 5, 2015 4-1-2015 (1) Gene Van Buren http://physics.unm.edu/VERTEX2015/
RHIC/AGS Users Meeting June 9-12, 2015 asap (3 plenary, 1 - HF, 1 - dileptons) Oleg Eyser, Ernst Sichtermann, Hongwei Ke,David Tlusty,Yi Guo, Takafumi Niida (direct invite) http://www.rhicuec.org/users_meeting.aspx
Hard Probes - 2015 June 29 - July 3, 2015 4-1-2015 (1) Li Yi http://www.physics.mcgill.ca/hp2015/
Lomonosov - 2015 August 20 - 26, 2015 4-1-2015 (1) Sergey Voloshin http://www.icas.ru/english/index.htm
HYP-2015 September 7 - 12 , 2015 asap (1) Neha Shah http://lambda.phys.tohoku.ac.jp/hyp2015/
PHOTON - 2015  June 15-19, 2015  5-1-2015 (1) none  http://indico.inp.nsk.su/event/1/
SQM-2015 July 6 -11, 2015 4-30-2015 (3) Wei Xie, Xiaoping Zhang, Jinhui Chen http://sqm.jinr.ru/
EMMI-2015 July 20-22, 2015   Aihong Tang (direct invite) https://indico.gsi.de/conferenceDisplay.py?confId=3767
HIC for FAIR-2015 July 29-31,2015   Jochen Thaeder (direct invite) https://indico.cern.ch/event/399419/overview
HESZ2015 Sept. 9-12,2015   (1) none https://indico.cern.ch/event/407524/
QM-2015  Sept. 27-Oct. 4, 2015   (1 plenary) Mustafa Mustafa; Merged talks: Vipul Bairathi, Michael Lomnitz, Xiaozhi Bai, Barbara Trzeciak, Guannan Xie  http://qm2015.riken.jp/

2015 RHIC/AGS Open Forum Oct. 29, 2015    Zhenyu Ye (direct invite)  http://rhicusers.blogspot.com/2015/10/2015-rhicags-open-forum-meeting-at-aps.html
Color legend: Done = #99ccff, Direct = #99ffff, Failed = #ff0000

Nominated Speakers 2016-2018

Conference  Date  Abstracts due  Speaker(s) (talk uploaded/not uploaded/not uploaded but direct invite)  Webpage
RIKEN-BNL workshop "Opportunities for Exploring Longitudinal Dynamics in Heavy Ion Collisions at RHIC" Jan 20-22, 2016   1 talk: Flemming Videbaek  Note: talk given by Zhangbu Xu (conflict with other meeting)
WWND2016 Feb. 28 - Mar. 5, 2016   7 talks: Bingchu Huang, Ji Xu, Liang He, Kathryn Meehan, Long Ma, Kolja Kauder, Isaac Upsal https://indico.cern.ch/event/435555/
Note: Bingchu did not go - visa
QCD Moriond 2016 Mar 19-26, 2016   2 talks: Brian Page, Manuel Calderon de la Barca Sanchez https://cds.cern.ch/record/2069145?ln
ICHEP 2016 Aug 3-10, 2016 February 7, 2016 3 talks:Hao Qiu, Nihar Sahoo, Yadav Pandit http://www.ichep2016.org/
PIXEL 2016 Sep 5-9, 2016   1 talk: Giacomo Contin http://pixel2016.ge.infn.it
BARYONS 2016 May 16-20, 2016   1 talk: no speaker found http://baryons2016.physics.fsu.edu/
QCHS12 Aug 29 - Sep 3, 2016   1 talk: https://indico.cern.ch/event/353906/overview 
CPOD2016 May 30-June 4, 2016   1 talk: Z. Xu http://ift.uni.wroc.pl/~cpod2016/
3rd Elba workshop on Forward Physics @ LHC energy  May 30-June 1, 2016   direct invite: Kin Yip https://indico.cern.ch/event/505807/
RHIC/AGS Users Meeting June 9-10, 2016   3 talks: David Tlusty, Rosi Reed, Rongrong Ma  
RHIC/AGS Users Meeting, Spin session  June 7, 2016   1 talk: NOTE: the original talk on STAR forward upgrades was cancelled by the organizers, in agreement with the SPIN conveners, due to a large overlap with PHENIX. We proposed a talk on the polarized p+A program instead, but the organizers declined.
Quarkonia Working Group workshop June 6-10, 2016   3 talks: Zhenyu Ye, Takahito Todoroki, Siwei Luo http://events.pnnl.gov/default.aspx?topic=QWG2016
 YITP workshop on Meson in Nucleus  Jul 31-Aug 2, 2016   1 talk: Yi Guo http://menu2016.riken.jp
SQM 2016  June 27 - July 1, 2016   3 talks: Zhenyu Ye, Shusu Shi, Alexander Schmah http://sqm2016.lbl.gov
QCDN 2016 Jul 11-15, 2016   2 talks: Grant Webb, Stephen Trentalange http://tp.lc.ehu.es/QCD-N2016/
BEACH 2016 June 12-18, 2016 April 15, 2016 1 talk: Zhengqiao Zhang http://beach2016.gmu.edu
VERTEX 2016 Sep 26-30, 2016   1 talk: Leo Greiner http://indico.cern.ch/e/vertex2016
New Trends in High Energy Physics 2016 Oct 2-8, 2016   1 talk: http://indico-new.jinr.ru/event/ntihep2016
ISMD 2016 Aug 29-Sep 2, 2016   3 talks (2 HI, 1 spin): Prithwish Tribedy, Jie Zhao, Qinghua Xu; direct invite: Xiaofeng Luo, Fuqiang Wang https://nuclear.korea.ac.kr/indico/conferenceDisplay.py?ovw=True&confId=166 Note: Qinghua Xu - talk canceled
HSQCD 2016 June 27-July 1, 2016   3 talks: R. Lednicky, G. Nigmatkulov, spin - not found http://hepd.pnpi.spb.ru/~hsqcd/
ICNFP 2016 July 6-14, 2016   3 talks: HI overview: Petr Chaloupka; BES and SPIN: nobody found; direct invite: Michael Lomnitz https://indico.cern.ch/event/442094/
Baldin seminar  Sep 19-24, 2016   HI overview talk: Michael Tokarev; HF topical talk: Pavol Federic http://relnp.jinr.ru/ishepp/
J-PARC-HI workshop  Aug 8-9, 2016   1 talk: Daniel Cebra  
SPIN2016  Sep 25-30, 2016   2 talks: Devika Gunarathne, Suvarna Ramachandran http://conferences.physics.illinois.edu/spin2016/
Hard Probes 2016  Sep 23 - 27, 2016   1 talk: Bingchu Huang  https://indico.cern.ch/event/502239/
Diffraction 2016 Sep 2-8, 2016   3 talks: Leszek Adamczyk, Lukasz Fulek, Rafal Sikora http://www.cs.infn.it/diff2016
MPI at LHC 2016 Nov 28 - Dec 3, 2016   3 talks: Li Yi, Akio Ogawa, Zhenyu Ye  http://www.nucleares.unam.mx/MPI2016/
HINT2016 December 5-8, 2016   1 talk: Zhangbu Xu  http://j-parc.jp/pn/HINT2016
YSTAR2016 November 16-17, 2016   1 talk: Aihong Tang https://www.jlab.org/conferences/YSTAR2016/index.html
APS "Spring" meeting 2017 January 28-31, 2017   1 spin talk: Salvatore Fazio
WWND2017 January 8-14, 2017 November 15, 2016 7-8 talks: David Tlusty, Frank Geurts, Zach Miller, Shenghui Zhang, Niseem Magdy https://indico.cern.ch/event/555216/
QM2017 February 6-11, 2017   plenary talk: Alexander Schmah http://qm2017.phy.uic.edu/
5th Workshop on QCD and Diffraction at LHC, Saturation 1000+ December 5-7, 2016   1 talk: Wlodek Guryn http://qcdworkshop.ifj.edu.pl  
Santa Fe Heavy-Flavor and Jet workshop February 13-15, 2017   1 talk: Kolja Kauder    
QCD ECT* workshop February 27-March 3, 2017   1 talk: Petr Chaloupka https://indico.cern.ch/event/589766/overview  
QCD Moriond 2017 March 27-April 1, 2017   2 talks: HF (Rongrong Ma), CME (Fuqiang Wang) http://moriond.in2p3.fr/QCD/2017/ Note: Fuqiang canceled - talk given by Zhangbu Xu
Excited QCD 2017 May 7-13, 2017   1 talk: no speaker found https://indico.cern.ch/event/607808/  
PHOTON 2017 May 22-27, 2017   1 talk: Jaroslav Adam https://indico.cern.ch/e/photon2017  
RHIC/AGS Users Meeting  June 20-23, 2017   5 talks: HI overview: Lijuan Ruan; STAR upgrades: Prashanth Shanmuganathan; Run17 report: Jinlong Zhang; BES talk: Terry Tarnowsky; HF talk: Zhenyu Ye
RHIC/AGS Users Meeting HF+quarkonium workshop June 20-23, 2017   1 talk: Michael Lomnitz    
RHIC/AGS Users Meeting Isobar Program - CME, Isospin and Beyond workshop June 20-23, 2017   1 talk: Isaac Upsal    
RHIC/AGS Users Meeting spin workshop June 20-23, 2017   1 talk: Oleg Eyser    
RBRC workshop BNL June 26-28, 2017   1 talk: Elke Aschenauer https://www.bnl.gov/pppa17/  
QFTHEP 2017 June 26 - July 3, 2017   1 talk: Vitalii Okorokov  http://qfthep.sinp.msu.ru  
SQM 2017 July 10-15, 2017   2 talks: HI overview: Wangmei Zha, upgrade:Flemming Videbaek http://www.sqm2017.nl  
ICNFP 2017 August 17-26, 2017   4+ talks: HF+jets: Jaroslav Bielcik; spin: Carl Gagliardi; BES: ?; STAR upgrades: Chi Yang; direct invitation: Lokesh Kumar (light hadron production) https://indico.cern.ch/event/559774/ Note: Lokesh could not attend in the end, his talk given by Chi Yang
18th Lomonosov conference  August 24-30, 2017   direct invitation: Zhangbu Xu  http://www.icas.ru/english/index.htm  
DSPIN2017 September 11-15, 2017    1 talk: Mikhail Tokarev  http://theor.jinr.ru/~spin/2017/  
WE-Heraeus Physics School "QCD - Old Challenges and New Opportunities" September 24-30, 2017   1 talk (on behalf of RHIC exps.): M. Przybycien https://indico.cern.ch/event/614845
Initial Stages 2017  September 18-22, 2017   1 plenary talk: Niseem Magdy http://www.ujk.edu.pl/is2017/
HESZ workshop 2017: "Forward Physics and High-Energy Scattering at Zero Degrees" September 26-29, 2017   overview talk: Carl Gagliardi; topical talks: https://indico.cern.ch/event/590974
ICPPA 2017, Moscow October 2-5, 2017   1 overview talk: Olga Evdokimov http://indico.cfr.mephi.ru/event/14/
High pT physics for RHIC and LHC workshop October 2-5, 2017   3 talks: Li Yi, Yi Yang, Wangmei Zha    https://indico.cern.ch/event/615153/  
3rd Resonance Workshop Bergamo   1 talk: nobody found; direct invite: Richard Witt, Subhash Singha https://indico.ktas.ph.tum.de/event/1/timetable/?view=standard Note: Subhash Singha cannot go, talk given by Richard Witt
Quarkonia Working Group Meeting 2017 November 6-10, 2017   1 talk: Yi Yang    
MPI@LHC 2017 workshop December 11-15, 2017   3 talks: Mriganka Mondal, Md. Nasim, Bedanga Mohanty https://indico.cern.ch/event/625304/
HEP2018, Chile January 8-12, 2018   1+ plenary talks: Alexandre Suaide https://indico.cern.ch/event/628450/
QCD Moriond 2018 March 17-24, 2018   2 plenary talks: HF: Zebo Tang, spin: Adam Gibson-Even  http://moriond.in2p3.fr/QCD/2018/  
Excited QCD 2018 March 11-15, 2018   1 talk: Pavol Federic  https://indico.cern.ch/event/685849/  
Workshop on Chirality, Vorticity and Magnetic Field in Heavy Ion Collisions 2018 March 19-22, 2018   direct invite: Gang Wang, Aihong Tang  https://agenda.infn.it/conferenceDisplay.py?confId=13907
WWND 2018 March 25-31, 2018    nobody found  https://indico.cern.ch/event/664310/  
DIS 2018 April 16-20, 2018   Elke-Caroline Aschenauer  https://indico.cern.ch/event/656250/  
Quark Matter 2018 May 13-19, 2018   QM plenary: Zhenyu Ye  https://qm2018.infn.it  
WPCF 2018 May 22-26, 2018   5 talks: Hanna Zbroszczyk, Grigory Nigmatkulov, Pawel Szymanski, Andrzej Lipiec http://wpcf2018.ifj.edu.pl
CIPANP 2018 May 29- June 3, 2018   1 talk: Oleg Eyser http://cipanp18.berkeley.edu/  
MESON 2018 June 7-12, 2018   1 plenary talk: Hanna Zbroszczyk  http://meson.if.uj.edu.pl/  
HYP2018 June 24-29, 2018   direct invite: Isaac Upsal, Jinhui Chen  https://www.jlab.org/conferences/hyp2018/
CPOD 2018 September 24-28, 2018   1 plenary talk: Grazyna Odyniec    
RHIC/AGS Users Meeting  June 12-15, 2018   Physics Highlights: Gang Wang; Run 18 report: JH Lee; STAR upgrade: Qinghua Xu  https://www.bnl.gov/aum2018/
RHIC/AGS Users Meeting: Jets/High-pT workshop June 12-13, 2018   2 talks: Derek Anderson, Nick Elsey  https://www.bnl.gov/aum2018/
RHIC/AGS Users Meeting - BES workshop June 12, 2018   3-4 talks: Isaac Upsal, Prashanth Shanmuganathan, Benjamin Schweid, Arghya Chatterjee  https://www.bnl.gov/aum2018/  
RHIC/AGS Users Meeting - Heavy Flavor workshop June 12-13, 2018   2 talks: open HF: Guannan Xie; quarkonia: Shuai Yang  https://www.bnl.gov/aum2018/
RHIC/AGS Users Meeting: Workshop on spin and forward physics June 12, 2018   3 talks: gluon helicity: Ting Lin; W boson: Matt Posik; transverse spin: Kevin Adkins  https://www.bnl.gov/aum2018/
ICNFP 2018 July 4-12, 2018   7+ talks: only one found:
HI overview: Sonia Kabana
https://indico.cern.ch/event/663474/overview  
PIC 2018 September 11-15, 2018   1 RHIC overview:
Shusu Shi
https://indico.cern.ch/event/674840/  
ISHEPP 2018 (Baldin seminar) September 17-22, 2018   1 HI plenary talk: Alexey Aparin http://relnp.jinr.ru/ishepp/  
SPIN 2018 September 10-14, 2018   3 talks:
W A_L: Qinghua Xu
gluon helicity: Christopher Dilks
TMD: Carl Gagliardi
http://spin2018.unife.it  
Hard Probes 2018 October 1-5, 2018   1 plenary talk: Sooraj Radhakrishnan https://indico.cern.ch/event/634426/  
VERTEX 2018 October 21-26, 2018   1 talk: J. Webb https://indico.cern.ch/event/710050/overview  
ICPPA 2018 October 22-26, 2018   1 talk: G. Nigmatkulkov  https://indico.particle.mephi.ru/e/ICPPA2018  
CFNS 2018 workshop: Forward Physics and Instrumentation from Colliders to Cosmic Rays October 17-19, 2018   1 talk: O. Eyser  https://indico.bnl.gov/event/4737  
ATHIC 2018 November 3-6, 2018   1 talk: direct invite Yi Yang http://pnp.ustc.edu.cn/athic2018/  
MPI@LHC 2018 December 10-14, 2018   2 talks:
low-x+diffraction:
small-x
 
 https://indico.cern.ch/event/736470/  
WWND 2019  January 6-12, 2019   up to 10 talks:  https://indico.cern.ch/event/766194/
 
 
HFM-2019 March 18-20, 2019   1 talk: Md. Nasim  https://indico.cern.ch/event/760884  
TEST          
           

Done: #99ccff
Direct: #99ffff
Failed:#ff0000

Nominated Speakers 6/07 thru 6/08

Nominated speakers 2016-2019




Conference

Date

Abstracts due

Speakers(s) (talk uploaded/not uploaded/not uploaded but direct invite)

Webpage

 

RIKEN-BNL workshop

"Opportunities for Exploring Longitudinal Dynamics in Heavy Ion Collisions at RHIC"

Jan 20-22, 2016

 

1 talk: Flemming Videbaek

Note: talk given by Zhangbu Xu (conflict with other meeting)

 

WWND2016

Feb. 28 - Mar. 5, 2016

 

7 talks: Bingchu Huang, Ji Xu, Liang He, Kathryn Meehan, Long Ma, Kolja Kauder, Isaac Upsal

https://indico.cern.ch/event/435555/

Note: Bingchu did not go - visa

 

QCD Moriond 2016

Mar 19-26, 2016

 

2 talks: Brian Page, Manuel Calderon de la Barca Sanchez

https://cds.cern.ch/record/2069145?ln

 

ICHEP 2016

Aug 3-10, 2016

February 7, 2016

3 talks: Hao Qiu, Nihar Sahoo, Yadav Pandit

http://www.ichep2016.org/

 

PIXEL 2016

Sep 5-9, 2016

 

1 talk:  Giacomo Contin

http://pixel2016.ge.infn.it

 

BARYONS 2016

May 16-20, 2016

 

1 talk: no speaker found

http://baryons2016.physics.fsu.edu/

 

QCHS12

Aug 29 - Sep 3, 2016

 

1 talk:

https://indico.cern.ch/event/353906/overview

 

CPOD2016

May 30-June 4, 2016

 

1 talk: Z. Xu

http://ift.uni.wroc.pl/~cpod2016/

 

3rd Elba workshop on Forward Physics @ LHC energy

May 30-June 1, 2016

 

direct invite: Kin Yip

https://indico.cern.ch/event/505807/

 

RHIC/AGS Users Meeting

June 9-10, 2016

 

3 talks: David Tlusty, Rosi Reed, Rongrong Ma

   

RHIC/AGS Users Meeting, Spin session

June 7, 2016

 

1 talk:

NOTE: the original talk on STAR forward upgrades was cancelled by the organizers, in agreement with the SPIN conveners, due to a large overlap with PHENIX. We proposed a talk on the polarized p+A program instead, but the organizers declined it :-(

 

Quarkonia Working Group workshop

June 6-10, 2016

 

3 talks: Zhenyu Ye, Takahito Todoroki, Siwei Luo

http://events.pnnl.gov/default.aspx?topic=QWG2016

 

YITP workshop on Meson in Nucleus

Jul 31-Aug 2, 2016

 

1 talk: Yi Guo

http://menu2016.riken.jp

 

SQM 2016

June 27 - July 1, 2016

 

3 talks: Zhenyu Ye, Shusu Shi, Alexander Schmah

http://sqm2016.lbl.gov

 

QCDN 2016

Jul 11-15, 2016

 

2 talks: Grant Webb, Stephen Trentalange

http://tp.lc.ehu.es/QCD-N2016/

 

BEACH 2016

June 12-18, 2016

April 15, 2016

1 talk: Zhengqiao Zhang

http://beach2016.gmu.edu

 

VERTEX 2016

Sep 26-30, 2016

 

1 talk: Leo Greiner

http://indico.cern.ch/e/vertex2016

 

New Trends in High Energy Physics 2016

Oct 2-8, 2016

 

1 talk:

http://indico-new.jinr.ru/event/ntihep2016

 

ISMD 2016

Aug 29-Sep 2, 2016

 

3 talks (2 HI, 1 spin): Prithwish Tribedy, Jie Zhao, Qinghua Xu

direct invite: Xiaofeng Luo, Fuqiang Wang

https://nuclear.korea.ac.kr/indico/conferenceDisplay.py?ovw=True&confId=166

Qinghua Xu - talk canceled

 

HSQCD 2016

June 27 - July 1, 2016

 

3 talks: R. Lednicky, G. Nigmatkulov, spin - not found

http://hepd.pnpi.spb.ru/~hsqcd/

 

ICNFP 2016

July 6-14, 2016

 

3 talks: HI overview: Petr Chaloupka,

BES and SPIN nobody found

direct invite: Michael Lomnitz

https://indico.cern.ch/event/442094/

 

Baldin seminar

Sep 19-24, 2016

 

HI overview talk: Michael Tokarev

HF topical talk: Pavol Federic

http://relnp.jinr.ru/ishepp/

 

J-PARC-HI workshop

Aug 8-9, 2016

 

1 talk: Daniel Cebra

   

SPIN2016

Sep 25-30, 2016

 

2 talks: Devika Gunarathne, Suvarna Ramachandran

http://conferences.physics.illinois.edu/spin2016/

 

Hard Probes 2016

Sep 23 - 27, 2016

 

1 talk: Bingchu Huang

https://indico.cern.ch/event/502239/

 

Diffraction 2016

Sep 2-8, 2016

 

3 talks: Leszek Adamczyk, Lukasz Fulek, Rafal Sikora

http://www.cs.infn.it/diff2016

 

MPI at LHC 2016

Nov 28 - Dec 3, 2016

 

3 talks: Li Yi, Akio Ogawa, Zhenyu Ye

http://www.nucleares.unam.mx/MPI2016/

 

HINT2016

December 5-8, 2016

 

1 talk: Zhangbu Xu

http://j-parc.jp/pn/HINT2016

 

YSTAR2016

November 16-17, 2016

 

1 talk: Aihong Tang

https://www.jlab.org/conferences/YSTAR2016/index.html

 

APS "Spring" meeting 2017

January 28-31, 2017

 

1 spin talk: Salvatore Fazio

   

WWND2017

January 8-14, 2017

November 15, 2016

7-8 talks: David Tlusty, Frank Geurts, Zach Miller, Shenghui Zhang, Niseem Magdy

https://indico.cern.ch/event/555216/

 

QM2017

February 6-11, 2017

 

plenary talk: Alexander Schmah

http://qm2017.phy.uic.edu/

 

5th Workshop on QCD and Diffraction at LHC, Saturation 1000+

December 5-7, 2016

 

1 talk: Wlodek Guryn

http://qcdworkshop.ifj.edu.pl

 

Santa Fe Heavy-Flavor and Jet workshop

February 13-15, 2017

 

1 talk: Kolja Kauder

   

QCD ECT* workshop

February 27-March 3, 2017

 

1 talk: Petr Chaloupka

https://indico.cern.ch/event/589766/overview

 

QCD Moriond 2017

March 27-April 1, 2017

 

2 talks: HF (Rongrong Ma), CME (Fuqiang Wang)

http://moriond.in2p3.fr/QCD/2017/

Fuqiang canceled - talk given by Zhangbu Xu

 

Excited QCD 2017

May 7-13, 2017

 

1 talk: no speaker found

https://indico.cern.ch/event/607808/

 

PHOTON 2017

May 22-27, 2017

 

1 talk: Jaroslav Adam

https://indico.cern.ch/e/photon2017

 

RHIC/AGS Users Meeting

June 20-23, 2017

 

5 talks: 

HI overview: Lijuan Ruan

STAR upgrades: Prashanth Shanmuganathan

Run17 report: Jinlong Zhang

BES talk: Terry Tarnowsky

HF talk: Zhenyu Ye

   

RHIC/AGS Users Meeting HF+quarkonium workshop

June 20-23, 2017

 

1 talk: Michael Lomnitz

   

RHIC/AGS Users Meeting Isobar Program - CME, Isospin and Beyond workshop

June 20-23, 2017

 

1 talk: Isaac Upsal

   

RHIC/AGS Users Meeting spin workshop

June 20-23, 2017

 

1 talk: Oleg Eyser

   

RBRC workshop BNL

June 26-28, 2017

 

1 talk: Elke Aschenauer

https://www.bnl.gov/pppa17/

 

QFTHEP 2017

June 26 - July 3, 2017

 

1 talk: Vitalii Okorokov

http://qfthep.sinp.msu.ru

 

SQM 2017

July 10-15, 2017

 

2 talks: HI overview: Wangmei Zha, upgrade: Flemming Videbaek

http://www.sqm2017.nl

 

ICNFP 2017

August 17-26, 2017

 

4+ talks: HF+jets: Jaroslav Bielcik, spin: Carl Gagliardi, BES: ?, STAR upgrades: Chi Yang

direct invitation:

Lokesh Kumar (light hadron production)

https://indico.cern.ch/event/559774/

Lokesh could not attend in the end; his talk was given by Chi Yang

 

18th Lomonosov conference

August 24-30, 2017

 

direct invitation: Zhangbu Xu

http://www.icas.ru/english/index.htm

 

DSPIN2017

September 11-15, 2017

 

1 talk: Mikhail Tokarev

http://theor.jinr.ru/~spin/2017/

 

WE-Heraeus Physics School

QCD - Old Challenges and New Opportunities

September 24-30, 2017

 

1 talk (on behalf of RHIC exps.): M. Przybycien

https://indico.cern.ch/event/614845

 

Initial Stages 2017

September 18-22, 2017

 

1 plenary talk: Niseem Magdy

http://www.ujk.edu.pl/is2017/

 

HESZ workshop 2017:

"Forward Physics

and High-Energy Scattering at Zero Degrees"

September 26-29, 2017

 

overview talk: Carl Gagliardi

topical talks:

https://indico.cern.ch/event/590974

 

ICPPA 2017, Moscow

October 2-5, 2017

 

1 overview talk: Olga Evdokimov

http://indico.cfr.mephi.ru/event/14/

 

High pT physics for RHIC and LHC workshop

October 2-5, 2017

 

3 talks: Li Yi, Yi Yang, Wangmei Zha

https://indico.cern.ch/event/615153/

 

3rd Resonance Workshop Bergamo

   

1 talk: nobody found

direct invite: Richard Witt, Subhash Singha

https://indico.ktas.ph.tum.de/event/1/timetable/?view=standard  

note: Subhash Singha could not go; talk given by Richard Witt

 

Quarkonia Working Group Meeting 2017

November 6-10, 2017

 

1 talk: Yi Yang

   

MPI@LHC 2017 workshop

December 11-15, 2017

 

3 talks: Mriganka Mondal, Md. Nasim, Bedanga Mohanty

https://indico.cern.ch/event/625304/

 

HEP2018, Chile

January 8-12, 2018

 

1+ plenary talks: Alexandre Suaide

https://indico.cern.ch/event/628450/

 

QCD Moriond 2018

March 17-24, 2018

 

2 plenary talks: HF: Zebo Tang, spin: Adam Gibson-Even

http://moriond.in2p3.fr/QCD/2018/

 

Excited QCD 2018

March 11-15, 2018

 

1 talk: Pavol Federic

https://indico.cern.ch/event/685849/

 

Workshop on Chirality, Vorticity and Magnetic Field in Heavy Ion Collisions 2018

March 19-22, 2018

 

direct invite: Gang Wang, Aihong Tang

https://agenda.infn.it/conferenceDisplay.py?confId=13907

 

WWND 2018

March 25-31, 2018

 

nobody found

https://indico.cern.ch/event/664310/

 

DIS 2018

April 16-20, 2018

 

Elke-Caroline Aschenauer

https://indico.cern.ch/event/656250/

 

Quark Matter 2018

May 13-19, 2018

 

QM plenary: Zhenyu Ye

https://qm2018.infn.it

 

WPCF 2018

May 22-26, 2018

 

5 talks: Hanna Zbroszczyk, Grigory Nigmatkulov, Pawel Szymanski, Andrzej Lipiec

http://wpcf2018.ifj.edu.pl

 

CIPANP 2018

May 29 - June 3, 2018

 

1 talk: Oleg Eyser

http://cipanp18.berkeley.edu/

 

MESON 2018

June 7-12, 2018

 

1 plenary talk: Hanna Zbroszczyk

http://meson.if.uj.edu.pl/

 

HYP2018

June 24-29, 2018

 

direct invite: Isaac Upsal, Jinhui Chen

https://www.jlab.org/conferences/hyp2018/

 

CPOD 2018

September 24-28, 2018

 

1 plenary talk: Grazyna Odyniec

   

RHIC/AGS Users Meeting

June 12-15, 2018

 

Physics Highlights: Gang Wang

Run 18 report: JH Lee, STAR upgrade: Qinghua Xu

https://www.bnl.gov/aum2018/

 

RHIC/AGS Users Meeting:

Jets/High-pT workshop

June 12-13, 2018

 

2 talks: Derek Anderson, Nick Elsey

https://www.bnl.gov/aum2018/

 

RHIC/AGS Users Meeting - BES workshop

June 12, 2018

 

3-4 talks: Isaac Upsal, Prashanth Shanmuganathan, Benjamin Schweid, Arghya Chatterjee

https://www.bnl.gov/aum2018/

 

RHIC/AGS Users Meeting - Heavy Flavor workshop

June 12-13, 2018

 

2 talks: 

open HF: Guannan Xie

quarkonia: Shuai Yang

https://www.bnl.gov/aum2018/

 

RHIC/AGS Users Meeting:

Workshop on spin and forward physics

June 12, 2018

 

3 talks: gluon helicity: Ting Lin

W boson: Matt Posik

transverse spin: Kevin Adkins

https://www.bnl.gov/aum2018/

 

ICNFP 2018

July 4-12, 2018

 

7+ talks: only one found:

HI overview: Sonia Kabana

https://indico.cern.ch/event/663474/overview

 

PIC 2018

September 11-15, 2018

 

1 RHIC overview: Shusu Shi

https://indico.cern.ch/event/674840/

 

ISHEPP 2018 (Baldin seminar)

September 17-22, 2018

 

1 HI plenary talk: Alexey Aparin

http://relnp.jinr.ru/ishepp/

 

SPIN 2018

September 10-14, 2018

 

3 talks: 

W A_L: Qinghua Xu

gluon helicity: Christopher Dilks

TMD: Carl Gagliardi

http://spin2018.unife.it

 

Hard Probes 2018

October 1-5, 2018

 

1 plenary talk: Sooraj Radhakrishnan

https://indico.cern.ch/event/634426/

 

VERTEX 2018

October 21-26, 2018

 

1 talk: J. Webb

https://indico.cern.ch/event/710050/overview

 

ICPPA 2018

October 22-26, 2018

 

1 talk: G. Nigmatkulov

https://indico.particle.mephi.ru/e/ICPPA2018

 

CFNS 2018 workshop: Forward Physics and Instrumentation from Colliders to Cosmic Rays

October 17-19, 2018

 

1 talk: O. Eyser

https://indico.bnl.gov/event/4737

 

ATHIC 2018

November 3-6, 2018

 

1 talk: direct invite Yi Yang

http://pnp.ustc.edu.cn/athic2018/

 

MPI@LHC 2018

December 10-14, 2018

 

2 talks:

low-x+diffraction:

small-x

https://indico.cern.ch/event/736470/

 

WWND 2019

January 6-12, 2019

 

up to 10 talks:

https://indico.cern.ch/event/766194/

 

HFM-2019

March 18-20, 2019

 

1 talk: Md. Nasim

https://indico.cern.ch/event/760884

Color legend - Done: #99ccff, Direct: #99ffff, Failed: #ff0000

STAR Talks Committee Recommendations

See the STAR Talks mailing list and the STAR Presentations link on the homepage for up-to-date information.

 

Standing Roster of STAR Talks Nominees

 


The following members of the STAR collaboration are eligible to be selected for STAR talks. If your name is not on this list and you would like it to appear here, please contact your Council member or the Chair of the STAR Talks Committee.

Please notify the Chair if there are errors or incorrect information on this list.

Last updated on 5/17/2012.

 

   

 

L. Adamczyk1, G. Agakishiev21, M.M. Aggarwal33, Z. Ahammed51, A.V. Alakhverdyants21, I. Alekseev19, J. Alford22, B.D. Anderson22, C.D. Anson30, D. Arkhipkin4, E. Aschenauer4, G.S. Averichev21, J. Balewski26, A. Banerjee51, Z. Barnovska 14, D.R. Beavis4, R. Bellwied47, M.J. Betancourt26, R.R. Betts10, A. Bhasin20, A.K. Bhati33, H. Bichsel53, J. Bielcik13, J. Bielcikova14, L.C. Bland4, I.G. Bordyuzhin19, W. Borowski44, J. Bouchet22, A.V. Brandin29, S.G. Brovko6, E. Bruna55, S. Bueltmann31, I. Bunzarov21, T.P. Burton4, J. Butterworth39, X.Z. Cai43, H. Caines55, M. Calderon6, D. Cebra6, R. Cendejas7, M.C. Cervantes45, P. Chaloupka14, Z. Chang45, S. Chattopadhyay51, H.F. Chen41, J.H. Chen43, J.Y. Chen9, L. Chen9, J. Cheng48, M. Cherney12, A. Chikanian55, W. Christie4, P. Chung14, J. Chwastowski11, M.J.M. Codrington45, R. Corliss26, J.G. Cramer53, H.J. Crawford5, X. Cui41, A. Davila Leyva46, L.C. De Silva47, R.R. Debbe4, T.G. Dedovich21, J. Deng42, R. Derradi de Souza8, S. Dhamija18, L. Didenko4, F. Ding6, A. Dion4, P. Djawotho45, X. Dong25, J.L. Drachenberg45, J.E. Draper6, C.M. Du24, L.E. Dunkelberger7, J.C. Dunlop4, L.G. Efimov21, M. Elnimr54, J. Engelage5, G. Eppley39, L. Eun25, O. Evdokimov10, R. Fatemi23, S. Fazio4, J. Fedorisin21, R.G. Fersch23, P. Filip21, E. Finch55, Y. Fisyak4, C.A. Gagliardi45, D.R. Gangadharan30, F. Geurts39, A. Gibson50, S. Gliske2, Y.N. Gorbunov12, O.G. Grebenyuk25, D. Grosnick50, S. Gupta20, W. Guryn4, B. Haag6, O. Hajkova13, A. Hamed45, L-X. Han43, J.W. Harris55, J.P. Hays-Wehle26, S. Heppelmann34, A. Hirsch36, G.W. Hoffmann46, D.J. Hofman10, S. Horvat55, B. Huang4, H.Z. Huang7, P. Huck9, T.J. Humanic30, L. Huo45, G. Igo7, W.W. Jacobs18, C. Jena16, J. Joseph22, E.G. Judd5, S. Kabana44, K. Kang48, J. Kapitan14, K. Kauder10, H.W. Ke9, D. Keane22, A. Kechechyan21, A. Kesich6, D. Kettler53, D.P. Kikola36, J. Kiryluk25, I. Kisel25, A. Kisiel52, V. Kizka21, S.R. Klein25, D.D. Koetke50, T. Kollegger15, J. Konzer36, I. Koralt31, L. Koroleva19, W. 
Korsch23, L. Kotchenda29, P. Kravtsov29, K. Krueger2, I. Kulakov25, L. Kumar22, M.A.C. Lamont4, J.M. Landgraf4, S. LaPointe54, J. Lauret4, A. Lebedev4, R. Lednicky21, J.H. Lee4, W. Leight26, M.J. LeVine4, C. Li41, L. Li46, W. Li43, X. Li36, X. Li42, Y. Li48, Z.M. Li9, L.M. Lima40, M.A. Lisa30, F. Liu9, T. Ljubicic4, W.J. Llope39, R.S. Longacre4, Y. Lu41, X. Luo9, A. Luszczak11, G.L. Ma43, Y.G. Ma43, D.M.M.D. Madagodagettige Don12, D.P. Mahapatra16, R. Majka55, O.I. Mall6, S. Margetis22, C. Markert46, H. Masui25, H.S. Matis25, D. McDonald39, T.S. McShane12, S. Mioduszewski45, M.K. Mitrovski4, Y. Mohammed45, B. Mohanty51, M.M. Mondal45, B. Morozov19, M.G. Munhoz40, M.K. Mustafa36, M. Naglis25, B.K. Nandi17, Md. Nasim51, T.K. Nayak51, J.M. Nelson3, L.V. Nogach35, J. Novak28, G. Odyniec25, A. Ogawa4, K. Oh37, A. Ohlson55, V. Okorokov29, E.W. Oldag46, R.A.N. Oliveira40, D. Olson25, P. Ostrowski52, M. Pachr13, B.S. Page18, S.K. Pal51, Y.X. Pan7, Y. Pandit22, Y. Panebratsev21, T. Pawlak52, B. Pawlik32, H. Pei10, C. Perkins5, W. Peryt52, P. Pile4, M. Planinic56, J. Pluta52, D. Plyku31, N. Poljak56, J. Porter25, A.M. Poskanzer25, C.B. Powell25, D. Prindle53, C. Pruneau54, N.K. Pruthi33, M. Przybycien1, P.R. Pujahari17, J. Putschke54, H. Qiu25, R. Raniwala38, S. Raniwala38, R.L. Ray46, R. Redwine26, R. Reed6, C.K. Riley55, H.G. Ritter25, J.B. Roberts39, O.V. Rogachevskiy21, J.L. Romero6, J.F. Ross12, L. Ruan4, J. Rusnak14, N.R. Sahoo51, I. Sakrejda25, S. Salur25, A. Sandacz52, J. Sandweiss55, E. Sangaline6, A. Sarkar17, J. Schambach46, R.P. Scharenberg36, A.M. Schmah25, B. Schmidke4, N. Schmitz27, T.R. Schuster15, J. Seele26, J. Seger12, P. Seyboth27, N. Shah7, E. Shahaliev21, M. Shao41, B. Sharma33, M. Sharma54, S.S. Shi9, Q.Y. Shou43, E.P. Sichtermann25, R.N. Singaraju51, M.J. Skoby18, D. Smirnov4, N. Smirnov55, D. Solanki38, P. Sorensen4, U.G. de Souza40, H.M. Spinka2, B. Srivastava36, T.D.S. Stanislaus50, S.G. Steadman26, J.R. Stevens18, R. Stock15, M. Strikhanov29, B. 
Stringfellow36, A.A.P. Suaide40, M.C. Suarez10, M. Sumbera14, X.M. Sun25, Y. Sun41, Z. Sun24, B. Surrow26, D.N. Svirida19, T.J.M. Symons25, A. Szanto de Toledo40, J. Takahashi8, A.H. Tang4, Z. Tang41, L.H. Tarini54, T. Tarnowsky28, D. Thein46, J.H. Thomas25, J. Tian43, A.R. Timmins47, D. Tlusty14, M. Tokarev21, T.A. Trainor53, S. Trentalange7, R.E. Tribble45, P. Tribedy51, B.A. Trzeciak52, O.D. Tsai7, J. Turnau32, T. Ullrich4, D.G. Underwood2, G. Van Buren4, G. van Nieuwenhuizen26, J.A. Vanfossen, Jr.22, R. Varma17, G.M.S. Vasconcelos8, F. Videbaek4, Y.P. Viyogi51, S. Vokal21, S.A. Voloshin54, A. Vossen18, M. Wada46, F. Wang36, G. Wang7, H. Wang28, J.S. Wang24, Q. Wang36, X.L. Wang41, Y. Wang48, G. Webb23, J.C. Webb4, G.D. Westfall28, C. Whitten Jr.7, H. Wieman25, S.W. Wissink18, R. Witt49, W. Witzke23, Y.F. Wu9, Z. Xiao48, W. Xie36, K. Xin39, H. Xu24, N. Xu25, Q.H. Xu42, W. Xu7, Y. Xu41, Z. Xu4, L. Xue43, Y. Yang24, Y. Yang9, P. Yepes39, Y. Yi36, K. Yip4, I-K. Yoo37, M. Zawisza52, H. Zbroszczyk52, J.B. Zhang9, S. Zhang43, W.M. Zhang22, X.P. Zhang48, Y. Zhang41, Z.P. Zhang41, F. Zhao7, J. Zhao43, C. Zhong43, X. Zhu48, Y.H. Zhu43, Y. Zoulkarneeva21, M. Zyzak25,

1AGH University of Science and Technology, Cracow, Poland
2Argonne National Laboratory, Argonne, Illinois 60439
3University of Birmingham, Birmingham, United Kingdom
4Brookhaven National Laboratory, Upton, New York 11973
5University of California, Berkeley, California 94720
6University of California, Davis, California 95616
7University of California, Los Angeles, California 90095
8Universidade Estadual de Campinas, Sao Paulo, Brazil
9Central China Normal University (HZNU), Wuhan 430079, China
10University of Illinois at Chicago, Chicago, Illinois 60607
11Cracow University of Technology, Cracow, Poland
12Creighton University, Omaha, Nebraska 68178
13Czech Technical University in Prague, FNSPE, Prague, 115 19, Czech Republic
14Nuclear Physics Institute AS CR, 250 68 Řež/Prague, Czech Republic
15University of Frankfurt, Frankfurt, Germany
16Institute of Physics, Bhubaneswar 751005, India
17Indian Institute of Technology, Mumbai, India
18Indiana University, Bloomington, Indiana 47408
19Alikhanov Institute for Theoretical and Experimental Physics, Moscow, Russia
20University of Jammu, Jammu 180001, India
21Joint Institute for Nuclear Research, Dubna, 141 980, Russia
22Kent State University, Kent, Ohio 44242
23University of Kentucky, Lexington, Kentucky, 40506-0055
24Institute of Modern Physics, Lanzhou, China
25Lawrence Berkeley National Laboratory, Berkeley, California 94720
26Massachusetts Institute of Technology, Cambridge, MA 02139-4307
27Max-Planck-Institut für Physik, Munich, Germany
28Michigan State University, East Lansing, Michigan 48824
29Moscow Engineering Physics Institute, Moscow, Russia
30Ohio State University, Columbus, Ohio 43210
31Old Dominion University, Norfolk, VA, 23529
32Institute of Nuclear Physics PAN, Cracow, Poland
33Panjab University, Chandigarh 160014, India
34Pennsylvania State University, University Park, Pennsylvania 16802
35Institute of High Energy Physics, Protvino, Russia
36Purdue University, West Lafayette, Indiana 47907
37Pusan National University, Pusan, Republic of Korea
38University of Rajasthan, Jaipur 302004, India
39Rice University, Houston, Texas 77251
40Universidade de Sao Paulo, Sao Paulo, Brazil
41University of Science & Technology of China, Hefei 230026, China
42Shandong University, Jinan, Shandong 250100, China
43Shanghai Institute of Applied Physics, Shanghai 201800, China
44SUBATECH, Nantes, France
45Texas A&M University, College Station, Texas 77843
46University of Texas, Austin, Texas 78712
47University of Houston, Houston, TX, 77204
48Tsinghua University, Beijing 100084, China
49United States Naval Academy, Annapolis, MD 21402
50Valparaiso University, Valparaiso, Indiana 46383
51Variable Energy Cyclotron Centre, Kolkata 700064, India
52Warsaw University of Technology, Warsaw, Poland
53University of Washington, Seattle, Washington 98195
54Wayne State University, Detroit, Michigan 48201
55Yale University, New Haven, Connecticut 06520
56University of Zagreb, Zagreb, HR-10002, Croatia

Users area

STAR User's home page area.

 

Bedangadas Mohanty

 

 

 

Figures from STAR publications

Baryon-to-meson ratio, v2 vs. beam energy, low-pT spectra, particle ratio

 

Jet, typical dijet event, RAA/RdAu

 

Theory Figures

Cross-over, phase diagram

 

 

 

 

Ilya Selyuzhenkov

Ilya Selyuzhenkov for the STAR Collaboration

My analysis links:

STAR papers (as a principal author)

  1. Azimuthal Charged-Particle Correlations and Possible Local Strong Parity Violation

    Principal authors:
    Sergei Voloshin, Ilya Selyuzhenkov, Jack Sandweiss, Evan Finch, Jim Thomas,
    Vasily Dzordzhadze, Ron Longacre, Yannis Semertzidis, Paul Sorensen, Gang Wang,
    Dhevan Gangadharan, Alexey Chikanian, Richard Majka, Vitaly Okorokov

    Published: Phys. Rev. Lett. 103, 251601 (2009)

  2. Observation of charge-dependent azimuthal correlations
    and possible local strong parity violation in heavy ion collisions

    Principal authors:
    Sergei Voloshin, Ilya Selyuzhenkov, Jack Sandweiss, Evan Finch, Jim Thomas,
    Vasily Dzordzhadze, Ron Longacre, Yannis Semertzidis, Paul Sorensen, Gang Wang,
    Dhevan Gangadharan, Alexey Chikanian, Richard Majka, Vitaly Okorokov

    Target Journal: Phys. Rev. C, arXiv:0909.1717 [nucl-ex]

  3. Charged and strange hadron elliptic flow in Cu+Cu collisions at √sNN = 62.4 and 200 GeV

    Principal authors:
    Feng Liu, Yan Lu, Navneet Pruthi, Ilya Selyuzhenkov, Shusu Shi,
    Xinghua Shi, Paul Sorensen, Aihong Tang, Sergei Voloshin, Nu Xu

    Target Journal: Phys. Rev. C

  4. System-Size Independence of Directed Flow Measured at the BNL Relativistic Heavy-Ion Collider

    Principal authors:
    Jiayun Chen, Declan Keane, Ilya Selyuzhenkov, Aihong Tang, Sergei Voloshin, Gang Wang

    Published: Phys. Rev. Lett. 101, 252301 (2008)

  5. Spin alignment measurements of the K*0(892) and phi(1020) vector mesons
    in heavy ion collisions at √sNN=200 GeV

    Principal authors:
    Jinhui Chen, Xin Dong, Jingguo Ma, Ilya Selyuzhenkov, Zebo Tang

    Published: Phys. Rev. C 77, 061902(R) (2008)

  6. Centrality dependence of charged hadron and strange hadron elliptic flow
    from √sNN=200 GeV Au+Au collisions

    Principal authors:
    Yuting Bai, Yan Lu, Markus Oldenburg, Art Poskanzer, Kai Schweda,
    Ilya Selyuzhenkov, Paul Sorensen, Aihong Tang, Sergei Voloshin, Nu Xu

    Published: Phys. Rev. C 77, 054901 (2008)

  7. Global polarization measurement in Au+Au collisions

    Principal authors:
    Ilya Selyuzhenkov and Sergei Voloshin

    Published: Phys. Rev. C 76, 024915 (2007)

  8. Mass, quark-number, and √sNN dependence of the second and fourth flow harmonics
    in ultrarelativistic nucleus-nucleus collisions

    Principal authors:
    Yuting Bai, Javier Castillo, Xin Dong,
    Kirill Filimonov, Hui Long, Yan Lu, Markus Oldenburg, Kai Schweda, Ilya Selyuzhenkov and Paul Sorensen

    Published: Phys. Rev. C 75, 054906 (2007)

  9. Directed flow in Au+Au collisions at √sNN=62.4 GeV

    Principal authors:
    Declan Keane, Markus Oldenburg, Ilya Selyuzhenkov,
    Aihong Tang, Sergei Voloshin, Gang Wang, Zhangbu Xu

    Published: Phys. Rev. C 73, 034903 (2006)

genevb: miscellaneous working plots



Web master

The STAR www site is divided into several parts, most of which will converge to a single Drupal-based content management system over time.

As a Collaboration, STAR makes use of the web for much of its daily functioning: communication, documentation, organization, and many other activities. In addition to being a tool for the Collaboration, the web also serves as a means of introducing our experiment, sharing our enthusiasm, and conveying our results to the rest of the interested world, scientists and non-scientists alike.

For more information about this www site, contact the appropriate person below.

Click here to contribute news items and meeting announcements or a general comment.

 

Andrew Tamis, David Stewart | Central site design & maintenance, PHP scripting & support, MySQL database design, STAR publications

Jerome Lauret | Server configuration & support, Hypernews mailing lists, Drupal site configuration

Liz Mogavero | Collaboration support, meetings & events coordination