TRUST AI
Transparent, Reliable and Unbiased Smart Tool for AI: Artificial Intelligence Systems Engineering and Management
Due to their black-box nature, existing artificial intelligence (AI) models are difficult to interpret and, hence, to trust. Practical, real-world solutions to this issue cannot come from the computer science world alone. The EU-funded TRUST-AI project brings human intelligence into the discovery process. It employs 'explainable-by-design' symbolic models and learning algorithms and adopts a human-centric, 'guided empirical' learning process that integrates cognition. The project will design TRUST, a trustworthy and collaborative AI platform; ensure its adequacy for tackling predictive and prescriptive problems; and create an innovation ecosystem in which academics and companies can work independently or together.
Scientific Advances
The project envisages advances in the state of the art on four main fronts:
1) More efficient symbolic model learning algorithms (program synthesis / genetic programming) that push the Pareto frontier to new heights in terms of the balance between performance and simplicity;
2) New interfaces that facilitate the integration of humans into the process of guided learning of symbolic models;
3) Algorithms for searching for explanatory models, such as counterfactuals, which make it possible to gain new insights during the learning process;
4) Solving concrete problems (such as predicting the evolution of tumours, selecting delivery windows in online commerce, and predicting energy consumption in buildings), which also lead to innovations in AI algorithms and models.
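The trade-off named in point 1 can be made concrete with a small sketch. The snippet below scores a handful of candidate symbolic models on two objectives, prediction error and expression size, and keeps only the Pareto-optimal ones. The candidate models, the toy dataset, and the node-count sizes are illustrative assumptions, not part of the project's actual algorithms (which extend GP-GOMEA and MS-GP); the sketch only shows what "pushing the Pareto frontier between performance and simplicity" means.

```python
# Illustrative sketch (not TRUST-AI code): Pareto selection of symbolic
# models on two objectives, mean squared error vs. expression size.

# Toy target relationship to recover: y = 2*x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

# Candidate symbolic models: (expression, callable, size in tree nodes).
candidates = [
    ("x",         lambda x: x,             1),
    ("2*x",       lambda x: 2 * x,         3),
    ("2*x + 1",   lambda x: 2 * x + 1,     5),
    ("x*x",       lambda x: x * x,         3),
    ("2*x + x*x", lambda x: 2 * x + x * x, 7),
]

def mse(model):
    """Mean squared error of a model on the toy dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def pareto_front(models):
    """Keep models that are not dominated on (error, size): a model is
    dominated if another is at least as good on both objectives and
    strictly better on one."""
    scored = [(name, mse(f), size) for name, f, size in models]
    front = []
    for name, err, size in scored:
        dominated = any(
            e2 <= err and s2 <= size and (e2 < err or s2 < size)
            for n2, e2, s2 in scored if n2 != name
        )
        if not dominated:
            front.append((name, err, size))
    return sorted(front, key=lambda t: t[2])
```

Running `pareto_front(candidates)` keeps `x` (smallest), `2*x` (small and fairly accurate) and `2*x + 1` (exact), while discarding `x*x` and `2*x + x*x`, which are beaten on both objectives. Genetic programming explores a vastly larger model space, but selects survivors by the same dominance logic.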
Results
The project has achieved important results on all fronts:
1) New algorithms have been developed, consisting of extensions of GP-GOMEA and MS-GP, which generate simple and effective models, using improvements such as simultaneous learning of multiple (complementary) models, joint optimisation of constants, extension to regression problems, and learning of function classes;
2) A general and flexible platform has been developed that supports guided learning in practice, allowing users to run any AI algorithm (with a focus on symbolic models), visualise the training of algorithms, interact with models, perform sensitivity analysis and experimentation, and find counterfactual explanations (all these functions can be called via UI or API, and can be easily updated or replaced);
3) A new counterfactual learning model has been developed, taking into account important properties such as simplicity, coherence, completeness, feasibility and the possibility of formalising assumptions;
4) Improvements were delivered on all three use cases: tumour evolution prediction using "multi-tree" models and function classes learned per patient type; prediction of the choice probability and cost of delivery windows in online commerce; and a comparison of GP and LSTM models for predicting energy consumption in buildings.
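The counterfactual properties listed in point 3 (simplicity, feasibility) can be illustrated with a minimal search. The toy loan-approval model, the feature grid, and the cost function below are assumptions made for the example, not the project's counterfactual learning model; the sketch only shows the core idea of finding the smallest feasible input change that flips a decision.

```python
import itertools

def model(income, debt):
    """Toy decision rule (illustrative assumption): approve a loan
    when income minus debt is at least 50."""
    return income - debt >= 50

def counterfactual(income, debt, steps=range(0, 101, 10)):
    """Search a grid of feasible perturbations (raise income, reduce
    debt) and return the cheapest change that flips the decision.
    Cost = total magnitude of change, encoding the simplicity
    requirement; debt is clamped at 0, encoding feasibility."""
    base = model(income, debt)
    best = None
    for d_inc, d_debt in itertools.product(steps, steps):
        cand = (income + d_inc, max(0, debt - d_debt))
        if model(*cand) != base:
            cost = d_inc + d_debt
            if best is None or cost < best[0]:
                best = (cost, cand)
    return best[1] if best else None
```

For a rejected applicant with income 60 and debt 30, the search returns (60, 10): reducing debt by 20 is enough to flip the decision, and no cheaper change on the grid does. A counterfactual of this kind answers "what would have to change for a different outcome?", which is the explanatory insight the platform exposes.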