My original background and research (1993-1998) were in electronics, physics, semiconductor neural networks, and machine learning, where statistical analysis and reproducibility of results were a must. However, when I switched to computer engineering in 1998 to make computer systems much more efficient (in performance, power consumption, and reliability) and thus advance my original research, I was shocked to find that there was no common experimental methodology. Furthermore, it was (and still is) common to report only a single number for varying characteristics (such as execution time or power consumption) and to play with the numbers to get the highest speedup for a publication.

Almost immediately, I started working on a common methodology and framework for sharing experimental setups and results in a reproducible way (code, data, experimental results, predictive models, and interactive articles). I could pursue this only in my spare time (there was no funding or interest in such activities back then) and had to solve so many problems that I spent nearly 15 years preparing the foundation and all the technology needed to enable collaborative, systematic, and reproducible R&D in computer engineering.

In 2008, I released the first public cTuning infrastructure and repository for sharing whole experimental setups, including cross-linked code and data, crowdsourcing experimentation, and connecting it with "big data" predictive analytics. This approach gained popularity in academia and industry, and eventually led to the non-profit cTuning Foundation, supported by an EU TETRACOM grant (2014). I currently continue most of my public research within this foundation while actively collaborating with the community. You can find further information here:

Since 2022, I have continued developing my CK and CM technology within our public MLCommons working group, helping the community modularize ML/AI systems and automate their benchmarking, optimization, design-space exploration, and deployment.