Reorganise the data storage
Reported by: tlatham | Owned by: tlatham
At present the data are read in via a TTree in LauFitDataTree. When the PDFs are asked to cache information, each makes its own local copy of its abscissa values, as well as of the likelihood value or intermediate calculated values. In fits with very large data samples this duplication could cause memory problems.
It would be better to keep only one copy of the data in memory and give the PDFs references to their abscissa values. They could still store their likelihood values and/or intermediate values locally.
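A minimal sketch of the proposed scheme, where a central store owns the single copy of each variable and the PDFs hold only references plus their own likelihood cache. The class and method names (DataStore, CachedPdf, cacheInfo) are illustrative stand-ins, not the actual LauFitDataTree / PDF interfaces:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Central store owning the single in-memory copy of each fit variable.
class DataStore {
public:
    void addVariable(const std::string& name, std::vector<double> values) {
        data_[name] = std::move(values);
    }
    // PDFs receive a const reference; no per-PDF copy of the abscissas.
    const std::vector<double>& variable(const std::string& name) const {
        return data_.at(name);
    }
private:
    std::map<std::string, std::vector<double>> data_;
};

// A PDF that references the shared abscissas and caches only its own
// per-event likelihood values (the toy PDF here is 1 - x/2).
class CachedPdf {
public:
    explicit CachedPdf(const std::vector<double>& abscissas)
        : abscissas_(&abscissas) {}
    void cacheInfo() {
        likelihood_.clear();
        likelihood_.reserve(abscissas_->size());
        for (double x : *abscissas_) {
            likelihood_.push_back(1.0 - 0.5 * x);
        }
    }
    double likelihood(std::size_t iEvt) const { return likelihood_[iEvt]; }
private:
    const std::vector<double>* abscissas_; // shared, owned by DataStore
    std::vector<double> likelihood_;       // local cache, one value per event
};
```

With N events and P PDFs this stores one copy of each abscissa column plus P likelihood caches, rather than up to P extra copies of the abscissas.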
In the case of extremely large statistics even this might break down, so we should think about what to do then. Perhaps just read directly from the TTree (with tweaked buffering options)?
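If even a single full copy of the data is too large, the events could be streamed through a fixed-size buffer, which is essentially what reading directly from the TTree with a tuned cache/basket size would achieve. A stand-in sketch in plain C++ (no ROOT dependency; ChunkedReader is a hypothetical name, and a real implementation would use the TTree's own buffering):

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Reads events (here, raw doubles) from disk in fixed-size chunks,
// so peak memory is bounded by the chunk size rather than the sample size.
class ChunkedReader {
public:
    ChunkedReader(const std::string& path, std::size_t chunkSize)
        : in_(path, std::ios::binary), chunkSize_(chunkSize) {}

    // Fill the buffer with the next chunk; returns false when exhausted.
    bool nextChunk(std::vector<double>& buffer) {
        buffer.resize(chunkSize_);
        in_.read(reinterpret_cast<char*>(buffer.data()),
                 static_cast<std::streamsize>(chunkSize_ * sizeof(double)));
        const std::size_t nRead =
            static_cast<std::size_t>(in_.gcount()) / sizeof(double);
        buffer.resize(nRead);
        return nRead > 0;
    }

private:
    std::ifstream in_;
    std::size_t chunkSize_;
};
```

The trade-off is that each likelihood evaluation in the fit would re-read the data, so this only pays off once the sample no longer fits in memory.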