CMS Drift Tube Chambers Read-Out Electronics

J. M. Cela (a), G. Dellacasa (b), C. Fernández-Bedoya (a), J. Marín (a), V. Monaco (b), J. C. Oller (a), P. De Remigis (b), A. Staiano (b), C. Willmott (a).

(a) CIEMAT, Avda. Complutense 22, 28040 Madrid, Spain. (b) INFN Sezione di Torino, Italy.

cristina.fernandez@ciemat.es

gdellaca@to.infn.it

Abstract

With the installation of CMS close to completion, the three levels of the final read-out system of the Drift Tube (DT) chambers are presented.

First, the Read-Out Boards (ROB) are responsible for the time digitization of the signals generated by a charged particle track. Second, the Read-Out Server (ROS) boards receive data from 25 ROB channels through 240 Mbps copper links and perform data merging for further transmission through an 800 Mbps optical link. Finally, the Detector Dependent Unit (DDU) boards merge data from 12 ROS boards to build an event fragment and send it to the global CMS DAQ through an S-LINK64 output at 320 MBps. These boards also receive synchronization commands from the TTC system (Timing, Trigger and Control), perform error detection on the data and send fast feedback to the TTS (Trigger Throttling System).

The functionality of these electronics has been validated in the laboratory and in several test beams, including an integration exercise with a fraction of the whole CMS detector and electronics that demonstrated proper operation and integration within the final CMS framework.

I. INTRODUCTION

CMS (Compact Muon Solenoid) is one of the general-purpose detectors being installed at the LHC (Large Hadron Collider), the new proton-proton collider currently being built at CERN, where both unprecedented luminosity (10^34 cm^-2 s^-1) and energy (14 TeV) frontiers will be reached.

One of the design goals of CMS is muon detection and track reconstruction, which is done by means of three different technologies: Cathode Strip Chambers (CSCs) in the endcaps, Resistive Plate Chambers (RPCs) in the endcaps and in the barrel, and Drift Tube chambers (DTs) in the barrel [1].

CMS consists of a large superconducting solenoid which generates a magnetic field of 4 T. Drift Tube chambers are located outside the coil, integrated in the five wheels of the iron return yoke of the magnet. Each wheel is organized in four concentric rings around the beam line, with 12 sectors per ring, each hosting four chambers, as can be seen in figure 1. Since sectors 4 and 10 contain two chambers in the external layer, the total number of chambers is 250.

DT chambers are responsible for muon identification and precise momentum measurement. They are made of three superlayers: the two outer ones perform the muon φ coordinate measurement, while the inner one measures the θ coordinate. Each superlayer consists of four layers of drift cells, staggered by half a tube width.

Figure 1: Transverse view of the CMS detector.

Any charged particle going through a cell volume will generate a signal on its anode wire, which is shaped by the front-end system before being injected into the read-out electronics. As the drift velocity (~55 µm/ns) is essentially constant over the cell volume, the position of the charged particle is directly related to the time measurement. Since the single-wire resolution is around 200 µm, a time resolution of 1 ns is more than enough to provide a precise position measurement.
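As a back-of-the-envelope check of this time-to-position argument, a minimal sketch using only the numbers quoted above (an illustration, not part of the read-out chain):

```python
# Sketch: linear drift-time-to-position conversion in a DT cell.
# The constant-velocity assumption is the approximation stated above;
# the real space-time relation is calibrated.

V_DRIFT_UM_PER_NS = 55.0  # approximate drift velocity in the cell

def drift_position_um(drift_time_ns: float) -> float:
    """Distance of the track from the anode wire, in micrometers."""
    return V_DRIFT_UM_PER_NS * drift_time_ns

# A 1 ns timing error maps to ~55 um of position error, well below
# the ~200 um single-wire resolution, as argued in the text.
print(drift_position_um(1.0))    # 55.0 um
print(drift_position_um(400.0))  # ~22 mm, the maximum drift distance
```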

A. Read-Out electronics requirements

The goal of the DT read-out electronics is the time digitization of the chamber signals with respect to the Level 1 Accept (L1A) trigger and the subsequent data merging, allowing the whole detector to be read out at a 100 kHz L1A rate.

The L1A latency will be around 3.2 µs; therefore, the memories in the system must have sufficient capacity to store hits until matching can be performed. Since the maximum drift time (400 ns) is much larger than the LHC bunch-crossing period (25 ns), a read-out system that manages overlapping triggers is required. Moreover, the system has to process 172,000 channels, and several stages of multiplexing and data reduction have to take place to read out each wheel at a maximum data rate of 320 Mbps.
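A quick sketch of the buffering arithmetic behind these requirements (illustrative only; all constants come from the paragraph above):

```python
# Sketch: why the DT read-out must handle overlapping triggers.
L1A_LATENCY_NS = 3200.0  # Level 1 Accept latency
MAX_DRIFT_NS   = 400.0   # maximum drift time in a cell
BX_PERIOD_NS   = 25.0    # LHC bunch-crossing period

# Hits must remain buffered for at least the trigger latency
# before L1A matching can be performed.
bx_in_flight = L1A_LATENCY_NS / BX_PERIOD_NS  # 128 crossings

# A single drift window spans many bunch crossings, so two nearby
# L1As can claim hits from overlapping time windows.
bx_per_window = MAX_DRIFT_NS / BX_PERIOD_NS   # 16 crossings
print(bx_in_flight, bx_per_window)
```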


The read-out electronics inside the CMS cavern is also required to operate over 10 years with limited maintenance under the CMS environmental conditions, such as a magnetic field of 0.08 T and the residual radiation resulting from the products of successive interactions. This forces the employment of radiation-tolerant devices, which must be tested to operate under a neutron fluence of 10^10 cm^-2 over 10 years, a charged-particle flux of 10 cm^-2 s^-1 and an integrated dose of 1 Gy.

II. ROB: READ-OUT BOARDS

The Read-Out Boards (ROB), developed at CIEMAT, are built around the HPTDC (High Performance Time to Digital Converter) ASIC developed by the CERN EP/MIC group [2].

This highly programmable TDC is based on the Delay Locked Loop (DLL) principle, providing a time bin of 0.78 ns, i.e., 265 ps resolution, when clocked at the LHC frequency of 40.08 MHz. Each ROB has four HPTDCs connected in a clock-synchronous token ring scheme with a bypass mechanism, and therefore digitizes up to 128 channels. Crosstalk measurements on the boards show deviations below 350 ps.
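The 0.78 ns bin follows from the DLL subdividing the clock period; a short check of the arithmetic (the 32-element DLL is the HPTDC's standard low-resolution configuration, an assumption not spelled out in the text):

```python
# Sketch: HPTDC time bin when clocked at the LHC frequency.
CLOCK_MHZ = 40.08
DLL_TAPS = 32  # assumed number of DLL delay elements

clock_period_ns = 1e3 / CLOCK_MHZ         # ~24.95 ns
time_bin_ns = clock_period_ns / DLL_TAPS  # ~0.78 ns, as quoted above
print(round(time_bin_ns, 2))              # 0.78
```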

An Altera CPLD manages the read-out protocol for transmission to a 240 Mbps serializer. Data is sent through a 30-meter copper link to the next level of the read-out system, the Read-Out Server (ROS) boards. The reliability of this link has been measured, and tests showed a bit error rate below 10^-15.

Read-out is performed at an effective bandwidth of 200 Mbps, with an average throughput of 16 Mbps, well below the ROB-ROS link bandwidth.

A triple redundancy mechanism that corrects single-bit upsets has been implemented in the Altera CPLD registers. The CPLD also manages the Test-Pulse mechanism of the ROB, a special operation mode that will be used to test and calibrate the whole DT electronics chain by emulating artificial vertical tracks.
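Such a triple-redundancy register is normally three copies of the register plus a bitwise majority vote; a minimal sketch of the voting logic (the actual CPLD implementation is not detailed here):

```python
# Sketch: bitwise 2-of-3 majority vote, the standard triple modular
# redundancy (TMR) scheme for correcting a single-bit upset.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three register copies."""
    return (a & b) | (a & c) | (b & c)

good = 0b10110010
upset = good ^ 0b00001000  # a single event upset flips one bit
assert tmr_vote(good, good, upset) == good  # the two good copies win
```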

Other features of the ROB include power supply, current and temperature monitoring, as well as power supply protection circuitry with fast shut-off capability to avoid over-currents. Such over-currents may be generated not only by electrical failures but also by radiation-induced errors. A picture of the ROB can be seen in figure 2.

Figure 2: Read-Out Board.

A. Validation tests

Since radiation-hard devices are not going to be employed, irradiation tests were performed on the ROBs at the Cyclotron Research Centre at the University of Louvain (UCL) with a total fluence of 5·10^10 protons cm^-2 at 60 MeV.

The results indicate that only single-event upsets may occur, with an estimated mean time between failures of ~3 days for all the HPTDCs in the whole detector. No other significant effect was observed.

In addition, temperature-cycling tests in an environmental chamber between 0 °C and 70 °C showed deviations in the time measurement of around 40 ps/°C and small variations in the voltages and currents.

Moreover, lifetime tests have been performed on a fully operating ROB at 105 °C ambient temperature to identify failure mechanisms in an accelerated stress test. During 4 months of operation no failure was found, which implies a worst-case failure rate below 1 failure per ROB during 10 years of operation. This failure rate has been calculated considering a low-activation-energy failure mechanism, such as solder bonding (0.4 eV) [3].

Finally, burn-in tests have been performed on all 1500 ROBs in order to discard devices affected by infant mortality.

ROBs have also been operated in various test beams at CERN (October 2001, May 2003 and October 2004) [4][5][6], connected to a fully dressed chamber under a 25 ns structured beam. It was shown that the ROB can stand high hit rates, including noisy channels, and that such conditions only affect a group of 8 channels.

B. The Minicrate

The ROBs are located inside the so-called Minicrate [7], attached to the DT chambers. The Minicrate is an aluminium profile with a large number of support structures that provide sound construction and thermal conduction for cooling the electronics through a water cooling system.

In the Minicrate, the ROBs are integrated together with the muon trigger electronics (Trigger Boards, TRBs, and Server Board, SB) and the Chamber Control Board (CCB). They share, among other things, power supplies, water cooling, wire chamber signals and the TTC signals (LHC clock, L1A, bunch reset, etc.) [8].

Configuration and monitoring of the ROB is performed through a JTAG interface over a common bus handled by the CCB. Chip-internal errors, as well as buffer overflows, are reported to the CCB like other system faults, and are also flagged within the read-out data flow.


III. ROS: READ-OUT SERVER

The second level of the DT data acquisition system is the Read-Out Server (ROS) board, a 9U VME board developed at CIEMAT, each of which reads 25 ROBs, i.e., one sector. A view of the ROS board is shown in the following figure.

Figure 4: Read-Out Server (ROS) board.

It stores and merges the digital input data at a 100 kHz L1A rate for further transmission to the DDU board. The ROS also performs data quality monitoring, scanning for errors and guaranteeing data integrity, and takes care of event synchronization with the TTC system.

ROS boards are located in crates inside the cavern, in the towers on one side of the CMS wheels. There are two crates per wheel, where the ROS boards are integrated with the Trigger Sector Collector (TSC) boards, sharing the TTC signals distributed by the TIM board (TTC Interface Module) through a custom backplane. The TIM board, based on the TTCrq mezzanine, has also been developed at CIEMAT.

For debugging of the drift tube trigger system, trigger data from the contiguous TSC board is also read by the ROS and included in the data flow as a 26th channel.

The 25 input channels are grouped in four blocks, each one managed by a so-called CEROS FPGA (Fig. 5). Input data are deserialized and stored in 4-kbyte input FIFOs.

Figure 5: Diagram of the ROS architecture.

In normal operation mode, event and bunch information is received from the TTC system and stored in an internal FIFO to allow handling of overlapping triggers. In each CEROS block, data is processed in parallel to speed up the read-out and then transmitted to a common bus following a star token protocol.

Data consistency, transmission errors and event synchronization are checked in each CEROS, and any integrity error is flagged not only in the status registers but also within the data flow. Moreover, only valid information or error status is transmitted to further levels of the read-out chain, a data reduction that achieves 8 kbytes per event for the whole detector.

The ROS processing time is around 300 clock cycles for an average muon event. This implies that read-out could be done at a fixed L1A rate of 134 kHz considering one muon per sector, much more than expected at the LHC.
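The 134 kHz figure follows directly from the processing time and the system clock; a quick check of the arithmetic (assuming the 40.08 MHz LHC clock):

```python
# Sketch: maximum sustainable L1A rate from the ROS processing time.
CLOCK_MHZ = 40.08
CYCLES_PER_EVENT = 300  # average muon event, one muon per sector

event_time_us = CYCLES_PER_EVENT / CLOCK_MHZ  # ~7.49 us per event
max_l1a_khz = 1e3 / event_time_us             # ~133.6 kHz
print(round(max_l1a_khz, 1))  # consistent with the 134 kHz quoted
```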

Data is serialized by the GOL ASIC [9] and sent by a VCSEL optical transmitter at 850 nm over 60-meter optical fibers to the DDU. The estimated throughput is 80 Mbps, while the effective transmission bandwidth is 640 Mbps.
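The 640 Mbps effective bandwidth is consistent with 8b/10b encoding on an 800 Mbps line rate; a hedged sketch of the link budget (the GOL does support 8b/10b, but the encoding assumption is ours):

```python
# Sketch: ROS-to-DDU optical link budget.
LINE_RATE_MBPS = 800.0    # GOL serializer line rate
CODE_EFFICIENCY = 8 / 10  # 8b/10b: 8 payload bits per 10 line bits
THROUGHPUT_MBPS = 80.0    # estimated average ROS output

payload_mbps = LINE_RATE_MBPS * CODE_EFFICIENCY  # 640.0 Mbps
occupancy = THROUGHPUT_MBPS / payload_mbps       # 0.125, ample margin
print(payload_mbps, occupancy)
```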

Other operation modes have also been implemented in the ROS, such as a spy mode where snapshots of several events are stored in a 512 kB memory for later reading through VME. This mode can run in parallel with normal acquisition without interfering.

For debugging purposes, the input FIFOs can also be read directly through VME, and a transmission debugging mode has been implemented in which data is written to the internal memory and sent at a selectable bandwidth to the DDU.

The ROS also has current, voltage and temperature sensors on board and an over-current protection system. Configuration and status monitoring are performed through a VME interface, which also allows remote FPGA re-programming.

A. Validation tests

ROS boards have been extensively tested in the laboratory at CIEMAT and also integrated with other parts of the subsystem at Torino, Bologna and Legnaro, where they have been tested with the TSC, the DDU boards and the final XDAQ software while reading two DT chambers.

Irradiation tests have also been performed at UCL with 5·10^10 protons cm^-2 at 60 MeV. The mean time between failures for single-event upsets ranges from around 2 to 17 days for the different devices.

Some 8-channel ROS prototypes have also been operated at different test beams [6]. Last summer, the final ROS was successfully operated integrated with a fraction of the CMS detector for around 170 hours, with no problems found and no effect from the 4 T magnetic field of the CMS magnet.

High-rate tests were also performed with one sector, simulating muon data at ROB level and reading out the ROS at a random L1A rate of 100 kHz, as expected at the LHC.

IV. DDU: DETECTOR DEPENDENT UNIT

The DDU board is a VME64x 9U module developed by INFN Torino. It collects digital data from 12 ROS boards through optical fibers. The whole DDU system is located in the CMS service cavern and consists of 5 modules.


The main task of the board is to send a consistent event fragment to the central CMS DAQ at every L1A trigger. To do so, it merges data from 12 ROS boards, checks data synchronization and the error codes included in the data flow, and keeps synchronization with the whole detector through the TTC system. A detailed description of the board is given following the block diagram (Fig. 6).

Figure 6: DDU block diagram.

Each input channel of the DDU consists of an Agilent HFBR-5710 optical receiver and a Texas Instruments TLK1501 deserializer. The deserializers are connected through 16-bit buses to 3 Input FPGAs (Xilinx XC2VP40-6 FG676), each one receiving data from 4 input channels. Inside the Input FPGAs the data format is converted from 16 to 64 bits, and one data packet is stored in an internal FIFO (8k × 64-bit) at every L1A trigger. In addition, each Input FPGA performs error detection, checking data consistency and possible transmission errors. The status of each data packet is saved in another FIFO (1k × 8-bit) to share this information with the Main FPGA (Fig. 7). The 4 channels inside each Input FPGA work in parallel.
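The 16-to-64-bit conversion amounts to packing four consecutive deserializer words into one FIFO word; a minimal sketch (the word ordering is an assumption here; the real firmware may differ):

```python
# Sketch: packing four 16-bit deserializer words into one 64-bit FIFO
# word, as done inside the Input FPGA. The first-received word is
# assumed to land in the least significant position.

def pack64(words16: list) -> int:
    assert len(words16) == 4
    out = 0
    for i, w in enumerate(words16):
        assert 0 <= w < (1 << 16)
        out |= w << (16 * i)
    return out

print(hex(pack64([0x1111, 0x2222, 0x3333, 0x4444])))
# -> 0x4444333322221111
```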

Figure 7: Input FPGA block diagram.

At every L1A trigger the Main FPGA (Xilinx XC2VP40-6 FG676) scans all the enabled input FIFOs to read out the stored data and build the event fragment to be sent to the central DAQ. The FPGAs exchange data through a 64-bit bus working at 40 MHz. The Main FPGA (Fig. 8) also checks for errors and keeps synchronization with the whole CMS system by means of the commands received from the TTCrx.

Figure 8: Main FPGA block diagram.

Data are packed according to the CMS common format and stored in an external output FIFO (IDT 72V72100, 64k × 72-bit), readable via the S-LINK64 at 320 MBps [10] or through the VME bus. The Main FPGA also contains a Spy FIFO (64k × 32-bit), readable from the VME bus in parallel with normal data acquisition, and an internal data generator, both used for debug and test operations. In order to provide fast feedback to the central trigger system, the Main FPGA sends its status to the TTS (Trigger Throttling System [11]) via an LVDS output port. The TTS output is used to slow down the trigger rate in case of imminent buffer overflow, or to report synchronization and hardware problems in the DDU board or in the rest of the DT read-out electronics.
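The TTS feedback is essentially a mapping from buffer and error state to a small set of status codes; a sketch of such throttling logic (the state names follow the CMS TTS convention, while the occupancy thresholds are invented for illustration):

```python
# Sketch: TTS state selection in a DDU-like read-out board.
# Thresholds are illustrative only; the real firmware criteria also
# include the synchronization and hardware checks described above.

def tts_state(fifo_fill: float, sync_lost: bool, hw_error: bool) -> str:
    if hw_error:
        return "ERROR"        # hardware problem in the read-out
    if sync_lost:
        return "OUT_OF_SYNC"  # event/bunch mismatch with the TTC
    if fifo_fill > 0.90:
        return "BUSY"         # stop triggers: overflow imminent
    if fifo_fill > 0.75:
        return "WARNING"      # ask the trigger to slow down
    return "READY"

print(tts_state(0.80, False, False))  # WARNING
```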

As mentioned before, the DDU has one more optical input to receive various information from the TTC system: the L1A trigger signal, event and bunch numbers, the 40.08 MHz and 80.16 MHz clocks and other synchronization commands. The two system clocks are crucial for the correct operation of the whole board (especially for the deserializers, which require a clock jitter lower than 40 ps peak-to-peak), so they are distributed using the QPLL chip [12] and differential buffers that add very little jitter (LVPECL buffers by ON Semiconductor). The whole printed circuit board, which has 12 layers, is optimized to guarantee fast signals and low-jitter clock distribution.

The VME interface, implemented in an FPGA (Xilinx XC2S200-6FG456C), is used for board configuration, monitoring and data sampling. Finally, the board provides JTAG access for boundary-scan operations and FPGA programming.


A. Validation tests

The functionality of the board has been tested in the laboratory, focusing on critical parts such as the clock jitter, the serial communications and the internal 64-bit bus connecting the FPGAs.

The deserializer clock signal, as shown in Fig. 9, has a jitter with a peak-to-peak value lower than 20 ps (about 2.3 ps rms), while the limit given by Texas Instruments is 40 ps.

Figure 9: Clock jitter measurement, done with a 12 GHz scope (40 GSa/s).

The internal 64-bit bus has been tested with a pseudo-random bit generator, implemented as a 64-bit LFSR (Linear Feedback Shift Register) in the 3 Input FPGAs and in the Main FPGA. After selecting one of the 3 Input FPGAs to drive the bus (via VME), at each rising clock edge a 64-bit word is generated in the Input FPGA and sent to the Main FPGA. The Main FPGA compares the received data with the internally generated word and reports errors in registers. The test has been run for several hours with both the 40.08 MHz and 80.16 MHz clock signals and no errors have been found (Fig. 10).
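Such a PRBS test is straightforward to reproduce; a minimal sketch of a 64-bit Fibonacci LFSR generator and comparator (the paper does not give the polynomial used in the FPGAs; the maximal-length taps x^64 + x^63 + x^61 + x^60 + 1 below are a common choice):

```python
# Sketch: 64-bit LFSR pseudo-random generator plus comparison, in the
# spirit of the internal-bus test: identical LFSRs run on both sides
# of the bus and any received/expected mismatch is counted.

MASK64 = (1 << 64) - 1
TAPS = (63, 62, 60, 59)  # x^64 + x^63 + x^61 + x^60 + 1, 0-indexed

def lfsr_step(state: int) -> int:
    fb = 0
    for t in TAPS:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & MASK64

tx = rx = 0xACE1ACE1ACE1ACE1  # both sides seeded identically (nonzero)
errors = 0
for _ in range(100_000):      # one 64-bit word per clock rising edge
    tx = lfsr_step(tx)        # word driven onto the bus
    rx = lfsr_step(rx)        # word expected by the receiver
    errors += tx != rx        # the real test compares actual bus data
print(errors)                 # 0 when the bus is error-free
```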

Figure 10: Eye diagram of one bit of the 64-bit internal bus. The bus was driven by the pseudo-random bit generator. Serial resistor terminations (39 Ω) were used inside the FPGA to match the impedance [13].

The board has been fully tested with the other components of the DT read-out and the CMS central DAQ, reaching a total data bandwidth greater than 200 MBps at the nominal L1A rate of 100 kHz.

The DDU has also been fully validated together with the rest of the electronics during the exercise performed at CERN in summer 2006. At present, the final version of the board is in production, and the first units are already installed in the CMS counting room in order to be ready for the CMS start-up.

V. CONCLUSIONS

After several years of design, prototyping and production, the complete DT read-out system is almost ready for operation in the CMS detector. Various prototypes of the different parts have been developed and tested in the laboratory, where chambers and electronics have been debugged and optimized.

The functionality of the read-out electronics has been satisfactorily tested in different test beams with 25 ns structured beams, where it has been operated with the rest of the DT detector and the design has been thoroughly validated. From August to November 2006 the whole DT read-out chain of three DT sectors was successfully operated with a magnetic field of 4 T in the CMS magnet. More than 170 hours of data taking and 200 million collected events have validated the functionality of the read-out system.

The different parts have also shown adequate behaviour under the environmental conditions of magnetic field and radiation expected during the 10 years of operation of CMS at the LHC.

Production of the read-out electronics at CIEMAT and INFN Torino has comprised 1500 ROBs, 250 Minicrates, 60 ROS boards, 10 TIM boards and 5 DDU boards. Most of these electronics have already been installed at CMS, and commissioning is currently taking place in order to be ready for the CMS start-up in 2008.

VI. REFERENCES

[1] CMS Collaboration, "CMS: The Muon Project. Technical Design Report," CERN/LHCC 97-32, Dec. 1997.

[2] http://micdigital.web.cern.ch/micdigital/hptdc.htm

[3] "Components Quality and Reliability, 1991/92," Intel Books.

[4] C. Albajar et al., "Test Beam Analysis of the First CMS Drift Tube Muon Chamber," NIM A, vol. 525 (2004).

[5] P. Arce et al., "Bunched Beam Test of the CMS Drift Tubes Local Muon Trigger," NIM A, vol. 534 (2004).

[6] M. Bontenackels et al., "Results of the First Integration Test of the CMS Drift Tubes Muon Trigger," CMS NOTE 2006/072, May 2006.

[7] C. F. Bedoya et al., "Electronics for the CMS Muon Drift Tube Chambers: the Read-Out Minicrate," IEEE Trans. Nucl. Sci., vol. 52, no. 4, pp. 944-949, Aug. 2005.

[8] http://ttc.web.cern.ch/TTC/intro.html

[9] http://proj-gol.web.cern.ch/proj-gol/gol_manual.pdf

[10] http://hsi.web.cern.ch/HSI/s-link, 2003

[11] CMS Trigger/DAQ Group, CMS Note 2002-033.

[12] http://proj-qpll.web.cern.ch/proj-qpll/
