I develop a novel methodology and open-source technology, powered by CK, to collaboratively optimize deep learning across the whole SW/HW stack (DNN engines, libraries, models and data sets) on diverse devices from IoT to supercomputers!
News:
  •  2017.February: Our CGO'07 research paper received the "test of time" award!
  •  2017.March: We partnered with General Motors and released a unique, portable and customizable open-source technology powered by CK to optimize deep learning at all levels across the diverse and ever-changing HW/SW stack, from IoT to supercomputers!
  •  2017.February: We discussed how to improve future Artifact Evaluation at the joint CGO-PPoPP'17 AE panel (Monday, 17:15-17:45, Austin, TX, USA)
  •  2017.February: The distinguished artifact at CGO'17 was implemented using our CK framework - see it on GitHub!
  •  2017.February: We co-authored ACM's policy on Result and Artifact Review and Badging and prepared the Artifact Appendices now used at Supercomputing'17!
  •  2016.December: We released a new version of our open-source Android application to crowdsource benchmarking and optimization of various DNN libraries and models (Dec.27) [ grab it at Google Play; get sources from GitHub; see crowd-results (scenario "crowd-benchmark DNN libraries") ]
  •  2016.October: We presented our collaborative approach to workload benchmarking at ARM TechCon'16 (Oct.27, Santa Clara, USA)
  •  2016.October: We updated the list of CK-powered open R&D challenges in computer engineering
  •  2016.June: Congratulations to Abdul Memon (my last PhD student) for successfully defending his thesis "Crowdtuning: Towards Practical and Reproducible Auto-tuning via Crowdsourcing and Predictive Analytics" at the University of Paris-Saclay. Most of the software, data sets and experiments are not only reproducible but also shared as reusable and extensible components via Collective Mind and CK!
  •  2016.May: Thanks to a one-year grant from Microsoft, we moved the Collective Knowledge Repository to the Azure cloud!
  •  Recent publications with my long-term vision: [DATE'16 (with artifacts), CPC'15 (with artifacts), Scientific Programming'14 (with artifacts), TRUST@PLDI'14].

[ News archive ]

My name is Grigori Fursin. I am the CTO of dividiti, Chief Scientist of the cTuning Foundation (a non-profit research organization) and a reproducible-research evangelist. I have an interdisciplinary background in computer engineering, physics, neural networks, electronics and machine learning, with a PhD in computer science from the University of Edinburgh.

I have more than 20 years of R&D experience directing research at the Intel Exascale Lab, the University of Edinburgh and INRIA, while closely collaborating with GM, Google, Intel, IBM and ARM. During that time, I prepared the foundations and an open-source research SDK (Collective Knowledge) to enable practical self-optimizing software and hardware (from cloud servers and supercomputers to deep learning on mobile devices and IoT).

Such systems continuously autotune and test themselves, exchange design and optimization knowledge via the CK live repo or P2P, and use machine learning to automatically adapt SW and HW at all levels to perform any given computation with a given data set in the most efficient way in terms of execution time, energy usage, memory footprint, accuracy, resiliency, HW price and other associated costs and resources (CPC'15 vision paper, IJPP'11).
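To illustrate the idea behind such autotuning (a minimal sketch only, not the actual CK implementation - the flag list, file names and measurement logic below are my own assumptions), the following Python snippet compiles a workload with randomly sampled GCC flag combinations, measures execution time and binary size, and keeps only the Pareto-optimal variants:

    # Minimal multi-objective autotuning sketch (illustrative only, not the CK implementation).
    # It explores random GCC flag combinations and keeps the Pareto front
    # over two objectives: execution time and binary size.
    import os
    import random
    import subprocess
    import time

    FLAGS = ["-O2", "-O3", "-Os", "-funroll-loops", "-ffast-math", "-flto"]  # assumed search space

    def build_and_measure(flags, src="app.c", out="app.bin"):
        """Compile 'src' with the given flags and return (time_sec, size_bytes)."""
        subprocess.run(["gcc", *flags, src, "-o", out], check=True)
        start = time.time()
        subprocess.run([f"./{out}"], check=True)          # run the compiled workload
        return time.time() - start, os.path.getsize(out)

    def dominates(a, b):
        """True if point a is no worse than b in all objectives and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    pareto = []  # list of (objectives, flags)
    for _ in range(20):
        flags = random.sample(FLAGS, k=random.randint(1, len(FLAGS)))
        obj = build_and_measure(flags)
        if any(dominates(p, obj) for p, _ in pareto):
            continue                                       # dominated by an existing point -> discard
        pareto = [(p, f) for p, f in pareto if not dominates(obj, p)]
        pareto.append((obj, flags))

    for (t, size), flags in pareto:
        print(f"time={t:.3f}s size={size}B flags={' '.join(flags)}")

In the crowdsourced setting described above, each participant would run a similar loop on their own device and share the resulting Pareto points together with the platform description via the public repository, so that machine-learning models can be trained to predict good optimizations for unseen workloads and devices.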

I now work closely with the community, leading universities and companies to apply my collaborative design and optimization methodology, open-source tools, public repository of knowledge, and CK-powered open challenges and optimization competitions. Together we crowdsource benchmarking, multi-objective optimization and co-design of AI and other complex applications across diverse hardware and data sets (see CK-powered DNN crowd-tuning and the related Android app), dramatically accelerating knowledge discovery, reducing time to market for new technology, and boosting innovation in science and technology.

After sharing all my artifacts and promoting collaborative R&D since 2007, I am glad and honored that my community-driven optimization approach helped motivate open, collaborative and reproducible computer systems research and experimentation, initiate Artifact Evaluation at the premier ACM conferences on parallel programming, architecture and code generation (PPoPP, CGO, PACT, RTSS, SC), and encourage fellow researchers to preserve, organize, cross-link and share their code and data in a unified, reusable, customizable and reproducible way (see CGO'17 artifacts and workflow shared in the CK format).

My techniques were published in PLDI, MICRO, IJPP, CGO, TACO, CASES and DATE, received the CGO "test of time" award, helped enable the world's first machine-learning-based self-tuning compiler (MILEPOST GCC) and establish the cTuning Foundation (a non-profit R&D organization), were included in mainline GCC, influenced Fujitsu, IBM and ARM (p.17), received an INRIA outstanding research award, and are being commercialized by the dividiti startup to collaboratively optimize HPC libraries and complex DNN algorithms across diverse hardware and data sets.

In the longer term, I am interested in using my interdisciplinary knowledge and experience to continue leading challenging and innovative projects related to AI, brain-inspired computing, bio-informatics, space exploration, medicine, big data predictive analytics, the Internet of Things and Exascale computing. When I have a bit of spare time, I prefer to spend it with my lovely family, as well as traveling, reading, discovering new cultures, playing football (soccer), skiing, swimming, snorkeling, climbing and jogging.

You can find further details about my R&D including short bio in my interactive CV (powered by Collective Knowledge).

CV shortcuts: Talks (T); Publications (P); Institution building (I); Keynotes (K); Organizing/chairing events (E); Social activities (O); Startups (C); Examiner; Expert service (E); Major research achievements (M); Public or in-house repositories of knowledge (R); Awards, prizes, and fellowships (A); Major funding (F); Professional experience (J); Education (Z); Major software and datasets (S); Hardware (H); Participating in program committees and reviewing; Teaching and organizing courses (L); Advising/collaborating (Q).

Languages: English - fluent (British citizen); Russian - native; French (spoken) - intermediate
Address: I currently live in the Paris suburbs and regularly commute to the UK and USA, where I have my main industrial and academic projects.
Professional Career:
Education:
  • 2004: PhD in computer science with an ORS award from the University of Edinburgh, UK.
  • 1999: MS in computer engineering with a gold medal (summa cum laude) from the Moscow Institute of Physics and Technology, Russia.
  • 1997: BS in electronics, mathematics and machine learning (summa cum laude) from Moscow Institute of Physics and Technology, Russia.
Academic partners: Imperial College London (UK), University of Manchester (UK), University of Pittsburgh (USA), University of Edinburgh (UK), Cambridge University (UK), University of Copenhagen (Denmark), UCAR (USA), INRIA (France), ENS Paris (France).
Main achievements:
  • 2017: Received the CGO test-of-time award for research on machine-learning-based optimization.
  • 2012-2016: Received an INRIA award and fellowship for "making an outstanding contribution to research".
  • 2014-2015: Received an EU TETRACOM grant to develop the 4th version of a universal machine-learning-based autotuning framework and public repository for artifact sharing (Collective Knowledge).
  • 2012-2014: Developed the 3rd version of a universal plugin-based autotuning framework driven by machine learning and supporting multiple objectives, including performance, energy, size and cost, for a variety of kernels, codelets and large applications with OpenCL, CUDA, OpenMP and MPI.
  • 2014-cur.: Initiated Artifact Evaluation for CGO, PPoPP, ADAPT and PACT (a follow-up to my initiative on collaborative and reproducible research).
  • 2008-cur.: Established the cTuning.org community-driven portal and non-profit foundation to share artifacts along with publications and reuse them to crowdsource software/hardware optimization combined with machine learning.
  • 2007-cur.: Transferred the developed technology to industry and production tools such as mainline GCC; consulted for major companies on systematic and reproducible program and architecture performance tuning, run-time adaptation and co-design.
  • 2007-2010: Prepared and taught a guest MS course on machine-learning-based optimization and run-time adaptation at the University of Paris-Sud, France.
  • 2006-2009: Led research and development of a machine-learning-based self-tuning compiler (proposing to crowdsource plugin-based autotuning and combine it with predictive analytics and collective intelligence) in the EU FP6 MILEPOST project; IBM considered it the first such compiler in the world.
  • 1999-2000: Led research and development of a polyhedral source-to-source compiler together with a collaborative plugin-based autotuning infrastructure and repository for memory-hierarchy optimization in supercomputers within the EU MHAOTEU project.
  • 1999-2006: Prepared the foundations for big-data-driven and machine-learning-based optimization, run-time adaptation and co-design of computer systems.
  • 1998-cur.: Started designing an infrastructure and repository for crowdsourcing experiments and sharing results (code, data, models, interactive graphs) in a reproducible way among colleagues and workgroups.
  • 1993-1998: Designed novel semiconductor neural-network accelerators for a possible brain-inspired computer (this work served as a motivator for machine-learning-based autotuning and collaborative R&D).
Main technical knowledge (continuously expanding): DNN, Caffe, TensorFlow, TensorRT, BLAS, Linux, Windows, Android, Python, scikit, neural networks, decision trees, SVM, agile development, large-scale project management, APIs, GCC, LLVM, polyhedral optimizations, ARM compilers, Intel compilers, Intel VTune, C, C++, Java, Fortran, Basic, GPU, OpenCL, CUDA, MPI, OpenMP, PHP, R, MySQL, FPGAs, ElasticSearch, Hadoop, Jenkins, HTML, Apache2, MediaWiki, Drupal, OpenOffice, Eclipse, SVN, Git, GIMP2, Adobe Photoshop, Visual Studio, Microsoft Office, Android Studio
Main interests and expertise:
Research and development:
  • developing a public framework and repository to preserve, organize, describe, cross-link, share and reuse any knowledge (code, data, experimental results) - see the sketch after this list
  • developing adaptive, self-tuning computer systems that can automatically adapt all their software and hardware layers to any user task while minimizing execution time, power consumption, failures and other costs
  • developing new techniques to speed up multi-objective SW/HW optimization, dynamic adaptation and co-design using big data analytics (statistical analysis, data mining, machine learning) and crowdsourcing
  • evangelizing and enabling collaborative and reproducible research in computer engineering
  • promoting new community-driven reviewing of publications and artifacts via Slashdot, Reddit, etc.
  • investigating biologically and brain-inspired systems (combining predictive analytics, neural networks, AI, physics, electronics)
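To make the first point above more concrete (a minimal sketch under assumed field names and directory layout, not the actual CK schema), an experiment can be preserved as a small, searchable record with enough meta-data to find, cross-link and reuse it later:

    # Illustrative sketch: preserving an experiment as a reusable, searchable record.
    # The field names and directory layout are hypothetical, not the actual CK format.
    import json
    import uuid
    from pathlib import Path

    record = {
        "uid": uuid.uuid4().hex,                  # unique id for cross-linking entries
        "tags": ["autotuning", "gcc", "pareto"],  # free-form tags for search
        "workload": {"program": "app.c", "dataset": "input-1.json"},
        "environment": {"compiler": "gcc 7.2", "cpu": "ARM Cortex-A53"},
        "choices": {"flags": "-O3 -funroll-loops"},
        "results": {"time_sec": 0.42, "binary_size_bytes": 18432},
    }

    # Store the record in a simple file-based repository so colleagues can find and reuse it.
    entry = Path("repo") / "experiment" / record["uid"]
    entry.mkdir(parents=True, exist_ok=True)
    (entry / "meta.json").write_text(json.dumps(record, indent=2))
    print("Saved reusable experiment entry:", entry)

Keeping experiments in such a unified format is what allows results from many users and machines to be queried, compared and reproduced later.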
Management:
  • preparing and leading challenging, long term, interdisciplinary R&D projects
  • building and leading teams of researchers and developers
Transfer to industry:
  • consulting companies on cTuning-related technology (knowledge management, reproducible experimentation, autotuning, machine learning)
  • moving technology to production design and optimization tool chains, including the open-source GCC and LLVM compilers
  • setting up joint industrial and academic laboratories
Full academic CV: HTML; PDF
Professional memberships: ACM, HiPEAC, IEEE
LinkedIn: Link
Research twitter: Link
My favourite story about Rutherford and a student: in English, in Russian
Hobbies: traveling, discovering new cultures, gardening, active sport (football, skiing, swimming, snorkeling, climbing, jogging, ...), photography, reading
Professional e-mail: Grigori.Fursin@cTuning.org or grigori@dividiti.com
Personal e-mail: gfursin@gmail.com

Check out our success stories:
  •  Our CGO'07 paper received the "test of time" award at CGO'17 (2017)!
  •  ARM and dividiti issued a press release about my Collective Knowledge technology [ PDF (page 17) ] (2016)!
  •  I started a crowd-tuning campaign, i.e. crowdsourcing GCC/LLVM tuning and combining it with active learning across diverse hardware, including mobile devices and cloud services provided by volunteers using the CK framework. You can see the latest crowd-results in our live repository (2016)!
  •  We successfully initiated community-driven pre-reviewing and validation of publications and artifacts for workshops and conferences - see the ADAPT 2016 workshop.
  •  After sharing all my artifacts and promoting collaborative research since 2007, I helped initiate Artifact Evaluation for the PPoPP, CGO, PACT, RTSS and SC conferences. Here is our motivation: (paper, wiki).
  •  Ed Plowman (director of performance analysis strategy at ARM) suggests contributing to Collective Knowledge and workload automation [ Slides ]!
  •  I received the HiPEAC technology transfer award for my novel Collective Knowledge framework.
  •  My cTuning technology was referenced by Fujitsu as closely related to their long-term initiative on "big data" driven optimization of Exascale computer systems (2014).
  •  My cTuning technology demonstrated the possibility of fully automating the construction of compiler optimization heuristics for multi-core reconfigurable systems using machine learning and crowdsourcing - IBM considered it the first in the world (2009).
  •  I extended cTuning-based technology to develop a customized "in-house" repository of knowledge while helping to establish the Intel Exascale Lab in France (2010-2011).
  •  My plugin-based compiler interface and technique to enable autotuning and run-time adaptation for statically compiled programs were added to mainline GCC (4.6+ and 4.8+, respectively), sponsored by Google [ Details ] (2008-2010).
[ All success stories ]

Website is powered by CK