TechTalk ... ALV-MTCF

 


What is the ALV-MTCF?

The ALV-MTCF is a fully functional digital Multiple Tau Correlator on a single integrated circuit which includes everything a Multiple Tau Correlator requires: shift registers, multipliers, correlation channel and monitor channel store, timing signal generation, host port interface etc. It features 8 Multiple Tau STC blocks and thus 72 correlation channels plus the required monitor channels. Using the ALV-MTCF in combination with a DSP or microprocessor, a super high performance Multiple Tau Correlator can easily be realized, with sampling and lag times from the sub-10 ns regime up to the hours regime.
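
As an illustration of how such a multiple tau scheme spreads its lag times quasi-logarithmically, the following sketch generates a lag time table under a commonly used layout, taken here merely as an assumption: 16 channels in the first block at the initial sampling time, 8 channels in each further block, and a doubling of the sampling time from block to block, which for 8 blocks indeed gives 16 + 7 x 8 = 72 channels. Function name and parameters are purely illustrative.

    def multiple_tau_lags(tau0, n_blocks=8, first_block=16, per_block=8):
        """Quasi-logarithmic lag times of a multiple tau scheme.

        Assumed layout (illustrative only): 'first_block' channels at the
        initial sampling time tau0, then 'per_block' channels per further
        block, with the sampling time doubled from block to block.
        """
        lags = [k * tau0 for k in range(1, first_block + 1)]
        sampling = tau0
        for _ in range(n_blocks - 1):
            sampling *= 2          # coarser sampling time in the next block
            last = lags[-1]
            lags += [last + k * sampling for k in range(1, per_block + 1)]
        return lags

    lags = multiple_tau_lags(tau0=5e-9)     # 5 ns initial sampling time (200 MHz clock)
    print(len(lags), min(lags), max(lags))  # 72 channels, 5 ns ... ~10 µs for one such block set

Longer lag times (up to the hours regime mentioned above) would then come from further, even coarser blocks handled together with the DSP or microprocessor.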

The ALV-MTCF is based on a 0.7 µm digital CMOS process and has approximately 250,000 transistors on chip. It features system clock speeds (and thus initial STC) of up to 200 MHz at exceptionally low power consumption and, of course, 100% real time operation capability.

Why not use FPGA instead of ASIC technology?

Speed and logic density are usually a major problem with FPGAs. For a logic circuit as complex as the ALV-MTCF, several thousand logic cell equivalents are required. While such FPGAs do exist nowadays, the problem of logic cell to logic cell interconnection remains, and a severe loss of speed in these interconnects is the usual result. All this can be avoided using ASIC technology, since, even with Standard Cell technology instead of Semi or Full Custom designs, optimization can take place at the logic item level (flip-flop, gate, buffer ...) instead of the logic cell level - more compact designs with much “shorter” and faster interconnects are the result. At least a factor of two in speed compared to the fastest FPGAs is easily obtainable using a 0.7 µm CMOS process (while modern FPGAs use more advanced 0.35 µm or even smaller processes), and clock speeds of up to 250 MHz and more are not at all impossible!

Power consumption is the other issue - not so much because sufficient power would be unavailable, but because high integration FPGAs usually work at very low supply voltages, which may lead to rather high required supply currents (as an example: if an FPGA running at 200 MHz requires 4 W at 1.8 V supply voltage, more than 2 A of supply current are needed!), and of course for thermal reasons. At clock speeds as high as 200 MHz, the power consumption of an FPGA can become a major problem, even with low supply voltage technology. Using digital ASIC technology and logic item based optimization instead, the ALV-MTCF requires less than 0.75 mA/MHz at 3.3 V supply voltage, or just 2.5 mW/MHz. Even at clock speeds as high as 200 MHz, the maximum power consumption of the ALV-MTCF is less than 1 W at 5 V! An additionally implemented automatic power down mode reduces the supply current to less than 0.75 mA, or just 2.5 mW at 3.3 V, in stand-by mode.
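
The supply figures above follow from simple arithmetic; the short sketch below merely recomputes them (illustrative only, the input numbers are the ones quoted in the text):

    # Back-of-the-envelope check of the supply figures quoted above.
    # FPGA example: 4 W drawn from a 1.8 V supply
    fpga_current = 4.0 / 1.8                        # ~2.2 A of supply current
    # ALV-MTCF: < 0.75 mA per MHz at 3.3 V supply voltage
    mtcf_mw_per_mhz = 0.75 * 3.3                    # ~2.5 mW per MHz
    mtcf_w_200mhz = mtcf_mw_per_mhz * 200 / 1000    # ~0.5 W at 200 MHz and 3.3 V
    print(fpga_current, mtcf_mw_per_mhz, mtcf_w_200mhz)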

Built-in self test capability is the final issue - if realized in an FPGA, the expensive logic cell resources must be used for it. Without this restriction, the ALV-MTCF contains additional built-in logic which allows a full functional test of the entire internal RAM store and of almost the entire correlator chain.

Isn’t FPGA development much easier and faster than ASIC development?

Yes and no - in the general design flow, not too many differences appear. However, since ASIC prototyping is much more lengthy and expensive than FPGA prototyping, one is much more cautious about getting correct simulator results first and tries to solve all remaining problems and bugs before the prototyping takes place. If the resulting ASIC then works as expected, remaining “not yet fully solved problems and bugs” are usually less probable than in FPGA designs (which can usually be prototyped in minutes to hours instead of weeks to months, and where less time is spent on simulation and simulator results).

What kind of detector do I need for a sub-10 ns digital correlator,
such as the ALV-6010-160, for example?

A cross correlation detector or a “pseudo” cross correlation detector such as the ALV / SO-SIPD-II. You will not be able to do these experiments in auto correlation mode - even the best available detectors (PMT, APD, MCP) are not fast and “uncorrelated” enough for sub-100 ns correlation times.
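
The reason is that detector artefacts such as afterpulsing (and dead time effects) are correlated with the detector’s own counts, but not with the counts of a second detector looking at the same light through a beam splitter. The toy simulation below - a hypothetical, strongly simplified detector model, not a description of any real device - illustrates this: the auto correlation of a single afterpulsing detector shows a spurious peak at the afterpulse delay, while the pseudo cross correlation of two detectors does not.

    import numpy as np

    rng = np.random.default_rng(0)
    n_bins = 2_000_000
    rate = 0.05                          # mean photon counts per sampling bin (hypothetical)

    photons = rng.poisson(rate, n_bins)  # photon stream arriving at the beam splitter

    def detect(stream, p_afterpulse=0.03, ap_delay=3):
        """Toy detector: each bin with a real count may trigger one
        spurious afterpulse 'ap_delay' bins later (hypothetical model)."""
        out = stream.copy()
        ap = (rng.random(len(stream)) < p_afterpulse) & (stream > 0)
        out[ap_delay:] += ap[:-ap_delay].astype(out.dtype)
        return out

    def corr(x, y, lag):
        """Normalized correlation estimate at a given lag (in bins)."""
        return np.mean(x[:-lag] * y[lag:]) / (np.mean(x) * np.mean(y))

    # Auto correlation of one afterpulsing detector ...
    single = detect(photons)
    # ... versus pseudo cross correlation: 50/50 beam splitter, two detectors
    half = rng.binomial(photons, 0.5)
    det_a, det_b = detect(half), detect(photons - half)

    for lag in (1, 2, 3, 4):
        print(lag, round(corr(single, single, lag), 3),
              round(corr(det_a, det_b, lag), 3))

In this sketch the auto correlation shows a clear excess at lag 3 (the assumed afterpulse delay), whereas the cross correlation stays flat - which is exactly why fast correlation measurements are done in (pseudo) cross correlation mode.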

In the description of the ALV-60X0 Correlator Series, you talk about peak count rates!
What exactly do you mean?

Every non-trivial correlation experiment shows either systematic or stochastic variation of the input “count rate” - if it doesn’t, the correlation function is not worth measuring. It is therefore important to understand the difference between instantaneous counts per sampling time (peak count rates) and average count rates (the average of counts over times much longer than the typical signal fluctuation time). Let’s clarify this with two different experiments, DLS and FCS. In Dynamic Light Scattering, the number of photons hitting the detector is composed of a more or less constant contribution and a highly varying contribution (although this is a somewhat simplified picture, it still holds true for this discussion). We can thus easily define an average count rate as photons per second, for example. Given this number, we still do not have any idea about the short time signal, besides that it will not vary outside certain bounds at reasonable probability. The expected variation of the number of photons detected also depends on the detection process itself, which can usually be well approximated as a Poisson process. The short time signal fluctuation is primarily a matter of the experimental conditions (particle size, scattering geometry ...). As a rule of thumb, peak count rates as high as 10 x the mean count rate must be expected at still notable probability. This means that in any DLS experiment, mean count rates much larger than 5 MHz must be avoided, because the peak count rate limit of any single photon detector is still only on the order of 20 MHz, and the detectors would saturate too often in such an experiment - severely distorted correlation functions would be the result.
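
The following small simulation illustrates the point under simple textbook assumptions (not a description of any particular instrument): the instantaneous intensity of a DLS-like signal is taken as exponentially distributed (fully developed speckle, a single coherence area), and photon detection on top of it as Poisson. Even then, instantaneous count rates many times the average are reached with non-negligible probability; in a real experiment, detection over several coherence areas softens this, which is why 10 x the mean is used as a practical rule of thumb.

    import numpy as np

    rng = np.random.default_rng(1)

    mean_rate = 5e6              # 5 MHz average count rate (the example from the text)
    tau = 25e-9                  # 25 ns sampling time (hypothetical)
    n = 1_000_000

    # DLS-like signal: exponentially distributed intensity (fully developed
    # speckle, single coherence area) -- an illustrative textbook assumption.
    intensity = rng.exponential(mean_rate, n)

    # Photon detection on top of the fluctuating intensity is Poissonian.
    counts = rng.poisson(intensity * tau)

    mean_cr = counts.mean() / tau
    peak_cr = np.quantile(counts, 0.999) / tau   # rate still reached at notable probability

    print(f"average count rate: {mean_cr / 1e6:.1f} MHz")
    print(f"peak count rate   : {peak_cr / 1e6:.0f} MHz (99.9% quantile)")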

In FCS, the differences between peak and average count rate are even larger - while the average count rate can be as small as a few kHz only, bursts equivalent to several MHz can very well appear, namely whenever a fluorescently marked molecule diffuses through the confocal volume. Here it is not only important to cope with high peak count rates, but with burst count rates of considerable value as well. These bursts are in the µs ... ms regime, but are again limited to no more than 10 ... 20 MHz by the detector capabilities. In FCS, the average count rate is of less importance, and peak or burst count rates must be considered. Clearly, designing the correlator store for the maximum peak count rate the ALV-MTCF could cope with would be a waste of expensive silicon resources - no detector is (and probably never will be) able to deliver even a fraction of these peak count rates without extreme distortions!

Still, to ensure the compatibility of a correlator with future single photon detector developments (if they ever come!?), the correlator should allow more than 5 MHz average and more than 30 MHz peak count rate. All correlators of the ALV-60X0 Series offer peak count rates much larger than 30 MHz (150 MHz for the ALV-6010/200, 120 MHz for the ALV-6010/160 and 120 MHz for the ALV-6000) and allow sustained average count rates larger than 25 MHz for a DLS equivalent signal - in practice, the single photon detectors set the limit for average count rates, not the ALV-MTCF.

A Comment on other Techniques for Photon Correlation

Besides digital correlation, other potential techniques exist for performing high speed correlation function calculation: Time of Flight Analysis / Time Correlated Photon Counting. In principle, both methods are very similar in what they do - both transform the problem of photon correlation function computation from a synchronous operation (as in the digital correlators) into a time measurement problem. Given two pulses, the temporal difference of their arrival at the measurement electronics can be used to compute a photon correlation function.

Time (difference) measurement can be performed in many ways, for example using Time to Amplitude Conversion (TAC) followed by analog to digital conversion of the recorded amplitude (e.g. a voltage), or using (multiple) propagation delays of electronic gates, variable delay lines etc. Typically, sub-ns temporal resolution can easily be achieved, and at first glance these techniques seem superior to digital correlation for ns correlation.
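
To make the idea concrete, the sketch below shows the time measurement approach in its simplest form (a hypothetical example with made-up arrival times and a merely assumed 1 ns resolution over a 16 bit range): for every start pulse the delay to the next stop pulse is digitized and histogrammed; such a start-stop delay histogram is the raw material from which a correlation function would then be computed.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical photon arrival times (in seconds) from two detectors
    t_start = np.sort(rng.uniform(0.0, 1e-3, 20_000))
    t_stop  = np.sort(rng.uniform(0.0, 1e-3, 20_000))

    # TAC / TOF style measurement: for every start pulse, digitize the delay
    # to the next stop pulse -- here with an assumed 1 ns resolution and a
    # 16 bit code, i.e. no more than 1 : 65536 temporal dynamics.
    resolution = 1e-9
    idx = np.searchsorted(t_stop, t_start)           # next stop pulse after each start
    valid = idx < len(t_stop)
    delays = t_stop[idx[valid]] - t_start[valid]
    codes = np.minimum((delays / resolution).astype(int), 65535)

    histogram = np.bincount(codes, minlength=65536)  # start-stop delay histogram
    print(histogram.sum(), histogram.argmax())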

However, severe disadvantages appear as well - for high temporal dynamics, a TAC requires large A/D conversion dynamics, 16 bit or more (which still allows no more than 1 : 65536 temporal dynamics). The typical problems of analog circuitry appear as well: thermal stability, low current/voltage measurements etc. The same holds true for propagation delays of electronic gates - the thermal stability is bad (around 0.5% / °C) and so is the precision of the time measurement (unless special precautions are taken) - in no case is the precision comparable to the synchronous digital correlation technique, which is as precise as the clock oscillator, usually better than 10 ppm or 0.001%. Another potential problem is the maximum count rate the TAC or TOF measurement can work with - if A/D conversion is used at high precision, more than 2 MHz maximum input count rate is very difficult to achieve, simply because the A/D conversion and data recording take their time. Additionally, the data stream from the A/D converter would be around 4 Mbyte/s, which either needs to be stored or pulse pairs have to be computed on the fly for obtaining the correlation function.
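
The 4 Mbyte/s figure simply follows from the assumed event rate and code width (illustrative arithmetic only):

    # Rough data rate estimate for the TAC + A/D approach discussed above.
    max_event_rate = 2e6                    # ~2 MHz maximum input count rate
    bytes_per_event = 2                     # one 16 bit delay code per detected event
    data_rate = max_event_rate * bytes_per_event / 1e6
    print(data_rate, "Mbyte/s")             # ~4 Mbyte/s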

The digital conversion is not at all easier to implement: 65536 gates cannot easily be lined up in a sequence without destroying the original pulse shape (which would mean the measurement gets more and more imprecise with increasing time), so other, more complicated techniques have to be applied. The data storage / computation effort is comparable to that of the TAC technique.

But what does that mean? It means that, to ensure reasonably linear behaviour of your measurement device with the count rate, the peak count rate must not exceed 2 MHz. For typical experiments, that is equivalent to an average count rate of no more than a few hundred kHz; for some others, such as fluorescence correlation spectroscopy, it may be even much less than that. Performing sub-ns correlation at such low count rates is of course close to infeasible (or at least a matter of hours or even days of measurement time) whenever accuracies comparable to “standard” photon correlation experiments, in the 0.1% or better range, are required.

Unless there is a desperate requirement for sub-ns correlation, or the experiment’s count rate is very low anyway, a fast digital correlator will always perform better than TAC or TOF.

Finally, sub-ns correlation is a detector problem as well - if the detector output pulse jitter is on the order of a few 100 ps, meaningful correlation functions cannot be expected, not even in cross correlation mode - and it certainly has to be a cross correlation technique, since otherwise dead time and afterpulsing would not allow such a measurement anyway.

*The ALV-MTCF digital ASIC development was funded under FUSE experiment 27.313
 
