
Centre for Educational Research and Innovation - CERI

Future of Skills - Events

 


Events

Planning meeting

Washington D.C., 27 March 2019

CERI held a joint planning meeting on 27 March 2019 with the U.S. National Academies of Sciences, Engineering, and Medicine and the InterAcademy Partnership, a global network of over 130 academies in science, engineering and medicine.

The objective of the meeting was to develop a shared understanding of:

  • the overall goal of assessing AI and robotics capabilities
  • the approach for carrying out the assessment
  • the roles of the three institutions.

 

Expert meeting – Skills and tests

Virtual, 5-6 October 2020

The Future of Skills project held an online expert meeting on 5-6 October 2020 with experts from various domains of psychology and computer science. The purpose of the meeting was to review existing taxonomies of human skills, and tests of those skills, that the project could use to assess artificial intelligence (AI) and robotics capabilities and compare them with human capabilities across the full range of skills used at work. The meeting compared the skill taxonomies and tests developed for different human contexts, including education, work, cognitive research, social and personality psychology, and the study of basic skills in young children, animals, and adults with neuropsychological impairments. The goal of this comparison is to build consensus on the skill taxonomy and the types of tests to use in the project. The discussion produced a set of broadly supported guidelines to govern the project’s choice of a skill taxonomy and a set of tests.

 

Expert meeting – Methodology

Virtual, 4 March 2021

The Future of Skills project held an online expert meeting on 4 March 2021 to discuss the challenges of, and solutions for, gathering expert judgement on AI and robotics capabilities using test items. The meeting brought together leading researchers in quantitative and qualitative methodologies and in different domains of psychology and computer science. Participants discussed key questions about constructing the tasks and instructions that experts will use in the assessment process. They reviewed procedures for identifying, sampling and training experts, as well as various approaches to eliciting their judgements. Finally, the group reflected on the validity of expert judgement and suggested guidelines for analysing the resulting assessments. Overall, the meeting tackled a number of methodological challenges and provided valuable guidance for developing the methodology for the exploratory assessments to be conducted in the next phase of the project.

 

Expert meeting – AI benchmarks and competitions

Virtual, 5 July 2021

The Future of Skills project held an online expert meeting on 5 July 2021 with a core group of experts from computer science and psychology. The purpose of the meeting was to discuss approaches to inventorying and categorising existing empirical evaluations of AI and robotics systems. The meeting reviewed a range of empirical evaluations, benchmarks and competitions carried out in the United States and several European countries, focusing on the domains of tasks and capabilities these evaluations cover and on a set of descriptors to help identify the more robust ones. The goal of this review is to produce detailed descriptions of the tasks and capabilities that have and have not been evaluated for AI and robotics systems. The discussion worked towards broadly supported guidelines on how to integrate these initial inventories of existing evaluations, benchmarks and competitions for the purposes of the project.

 

Expert meeting – Framework

Virtual, 26 October 2021

The AI and the Future of Skills project held an online expert meeting on 26 October 2021 with a group of computer science experts. The purpose of the meeting was to establish an approach to inventorying and categorising existing empirical evaluations of AI and robotics capabilities. Specifically, the experts defined quality criteria for selecting good benchmarks, tests and competitions, referred to as Evaluation Instruments (EIs), from the large variety of existing ones. They also selected a number of relevant EIs with which to test the framework. Good-quality EIs should improve the validity, reliability and fairness of the testing of AI capabilities. Such measures may also be used to assess isolated AI skills, reserving expert judgement for more complex education and work tasks that involve combinations of skills.

 
