Designing many-core computer systems that are efficient in terms of
performance, energy, size, reliability, cost, etc. has become intolerably
complex, ad hoc, costly and error-prone due to the limitations of
available technology, the enormous number of design and
optimization choices, and the complex interactions between all
software and hardware components.
Worse, unlike other mature sciences, computer engineering lacks
a common research and experimentation methodology, as well
as unified mechanisms for knowledge building and exchange, apart
from publications in which the reproducibility of results is often not
even considered.
After switching my research entirely to computer engineering in 1998,
I have been laying the foundations for a collaborative,
systematic and reproducible research and experimentation methodology,
together with a publication model in which experimental results and all
related material (code, data and experimental workflows) are continuously
shared, discussed, validated and improved by the community.
Since it was extremely difficult to persuade the community of the
importance of such an approach, I started leading by example,
sharing all my past research artifacts, including hundreds of
benchmarks, kernels, numerical applications, data sets, predictive models,
universal experimental analysis and auto-tuning pipelines, a self-tuning
machine-learning-based meta-compiler, and unified statistical analysis
and machine learning plugins, along with publications using my public
frameworks (cTuning V1,
cTuning V3 aka cM,
cTuning V4 aka CK)
and a live CK-powered repository of knowledge.
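
As a brief illustration of how such shared artifacts can be accessed
in a unified way, here is a minimal sketch using the Python API of the
CK framework. It assumes CK is installed (pip install ck) and that a
public repository of shared programs, such as ctuning-programs, has
been pulled first (ck pull repo:ctuning-programs); the wildcard query
below is only an example:

    import ck.kernel as ck

    # Every CK action takes and returns a plain dictionary,
    # which is what lets benchmarks, experimental workflows and
    # analysis plugins be chained into unified pipelines.
    r = ck.access({'action': 'search',
                   'module_uoa': 'program',
                   'data_uoa': '*'})

    if r['return'] > 0:
        # CK convention: a non-zero 'return' code signals an error
        # described in the 'error' field.
        raise RuntimeError(r.get('error', 'unknown CK error'))

    # List the unique names of all shared program entries found.
    for entry in r['lst']:
        print(entry['data_uoa'])
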
You can find further information about my long-term vision of, and the
foundations for, collaborative and reproducible analysis, optimization
and co-design of computer systems in the following publications [CPC'15,
GCC Summit'09].
Since 2006, my methodology and infrastructure have been used in multiple
academic and industrial projects and lectures (see the list below).
I continue to improve my technology and methodology for
collaborative and reproducible experimentation. Hence, if your
organization is interested in systematizing ad-hoc research and
experimentation and connecting it to "big data" predictive
analytics, in guest lectures, talks and tutorials about my research
and open-source cTuning technology, or in establishing joint projects
and possibly interdisciplinary labs, do not hesitate to get
in touch. At the moment I often travel to the UK and USA,
and am therefore primarily interested in opportunities there,
but I am open to other interesting possibilities too.
- In 2006-2009, my cTuning technology was used and extended
in the EU FP6 MILEPOST project to enable a machine-learning-based
self-tuning compiler, considered by IBM to be the first
in the world.
- In 2007-2010, I was a guest lecturer at
the University of Paris-Sud (France), where I prepared and taught
my own advanced MS course on machine-learning-based autotuning
and run-time adaptation.
- In 2010-2011, I helped
Intel establish the Exascale Lab
in France based on cTuning technology (I developed the concept
of the second cTuning version, the Codelet Tuning Infrastructure).
- Since 2013, I have collaborated with ARM on developing the fourth
generation of the BSD-licensed, open-source cTuning technology, aka
the Collective Knowledge Framework and Repository.
- In 2013, I considerably updated the cTuning technology (Collective Mind)
and gave two guest lectures at National Taiwan University.
- In 2014, building on my cTuning experience, I initiated
artifact evaluation for the PPoPP
and CGO conferences together with Bruce Childers. We continue
this initiative, with the backing of the ACM, while steadily improving
the procedures for sharing and reviewing artifacts.
- In 2014, Fujitsu issued a press release about their
long-term Exascale initiative on "big data"-driven optimization,
mentioning my cTuning technology
as one of its motivations.
- In 2015, I gave a guest lecture about my latest Collective Knowledge
technology at the University of Copenhagen, Denmark.