WG 12.10 - Artificial Intelligence & Cognitive Science
- Details
- Category: Working groups
- Published: 07 February 2020
Chair
Assoc. Prof. Kai-Florian Richter, Department of Computing Science, @openlandmarks, Umeå University, Sweden
http://www.kfrichter.org
Vice-Chair
Dr Maria Vasardani, VC Research Fellow, RMIT University, School of Science, Geospatial Science, Melbourne, VIC 3000, Australia
Aims
The working group aims to reconnect research in the areas of Artificial Intelligence and Cognitive Science. Specifically, the aims are:
- to raise awareness of the importance of concepts and findings from Cognitive Science in AI research that addresses spatial and spatio-temporal issues, and likewise to inform Cognitive Science researchers about relevant findings and methods from AI;
- to provide a platform for connecting researchers with similar interests and research questions, and possibly complementary skills and knowledge, thus allowing them to tackle issues they could not address alone;
- to promote findings and research to a wider audience in order to raise awareness of the importance of cognitive aspects in developing AI systems and services operating in spatial (and spatio-temporal) domains.
Scope
The working group addresses issues of intelligent systems being deployed in our day-to-day environments. An initial focus is on spatial and spatio-temporal problems, and systems such as social robots, self-driving vehicles, smart homes, and interactive location-based services. Issues of interest include (but are not restricted to):
- interaction and communication between humans and such systems;
- cognitive effects of using such systems;
- a system’s ability to understand (and mimic) human concepts of space, spatio-temporal phenomena, and communication more generally;
- ‘explainable AI’, i.e., a system’s ability to reason about and explain its own behavior;
- exploiting principles of human reasoning, representation, and communication in the design of intelligent systems.
Members of the WG
Sven Bertel, Flensburg University of Applied Sciences, Germany
Christian Freksa, University of Bremen, Germany
Toru Ishikawa, Toyo University, Japan
Markus Kattenbeck, TU Vienna, Austria
Alexander Klippel, Penn State University, USA
Holger Schultheis, University of Bremen, Germany
Thora Tenbrink, Bangor University, UK
Sabine Timpf, University of Augsburg, Germany
Working Group on Human-Centred Artificial Intelligence
- Details
- Category: Working groups
- Published: 07 February 2020
Chair
Prof. Dr. Albrecht Schmidt, Human-Centered Ubiquitous Media, Ludwig Maximilian University of Munich, Germany
https://www.en.um.informatik.uni-muenchen.de/people/professors/schmidt/index.html
Vice-Chair
TBA
New IFIP Working Group on Human-Centred Artificial Intelligence
The kick-off meeting of IFIP’s new Working Group on Human-Centred Intelligent Interactive Systems took place during the IFIP TC 13 INTERACT conference held in York in the last week of August 2023. Organised by Professor Albrecht Schmidt of Ludwig-Maximilians-Universität München, the meeting attracted 44 people from 17 countries; the new group is a joint initiative of WG 13.11 and WG 12.14.
Artificial Intelligence (AI) and machine learning (ML) have become key foundations for computing systems. Intelligent interactive systems directly impact humans and their relationship with data, information and smart environments. Designing and implementing such systems poses new challenges and requires new approaches in human-computer interaction (HCI). New opportunities for the design of user interfaces and interaction metaphors arise, and concepts and models for interactive systems will change. At a more abstract level, new dimensions need to be considered, including new ethical aspects and human-centred development.
Moving towards automated and autonomous systems will change the user experience and the relationship to digital technologies on both an individual and societal level. As a community, we need to find ways for humans to understand AI-based systems while retaining human control and oversight.
In the coming years, the role of HCI in the conception, design and implementation of AI applications will be defined. Collaboration between researchers and developers in AI and HCI is becoming essential to create real value for humans as well as for humanity. Advancing intelligent interactive systems requires skills and insights from both disciplines. Many properties, such as understandability, reliability, trustworthiness and safety, cannot be addressed without a deep understanding of user experience and interaction. However, many applications so far do not focus on people; they are not human-centred. Ensuring meaningful human interaction with AI is key to the mass adoption of intelligent technologies.
The new Working Group aims to shift the focus to how AI can empower humans and support their endeavours. It emphasises the human side of the interaction between people and AI. To reach this goal, operating under both TC13 (HCI) and TC12 (AI) will allow the group to research and develop scientific foundations for Human-Centred Intelligent Interactive Systems. Such foundations include methods, models and algorithms for constructing and evaluating these systems. The new working group will fulfil its mandate as an IFIP technical asset by researching and aiding the technical development of Human-Centred Intelligent Interactive Systems as a new dimension of computer science. It will add to the body of knowledge in informatics and contribute new insights, methods and tools to the profession, with the aim of advancing society.
Members of the WG
TBA
WG 12.13 – AI for Global Security
- Details
- Category: Working groups
- Published: 30 August 2013
Officers
Chair
Mr Dominique Verdejo, AI and Security, Montpellier, Occitanie, France
Co-Chair
Dr Frédérique Segond, Director at Inria, France
Secretary
Dr Arumugam Chamundeswari, SSN College of Engineering, Rajiv Gandhi Salai (OMR), Kalavakkam, Tamil Nadu, India
Aims
The main purpose of AISEC is to debate and demonstrate how AI can help manage all aspects of global security. We wish to collaborate with other IFIP and non-IFIP working groups that address related topics separately. This includes fostering international collaboration and bringing fresh ideas and opinions from a multidisciplinary, multilateral and multicultural group of stakeholders, including AI and security experts (from academia and elsewhere) and students. We also aim to elaborate reasonable mechanisms for AI governance and to mitigate AI risks.
While AI governance has become an urgent issue for humanity, calling for responsible design, development, deployment and use of AI, including regulation and standards, security in turn requires AI to augment the capacities of defense and security personnel so that they can achieve their missions efficiently in an increasingly complex world.
In its constitution, UNESCO invites people to think deeply about peace as a protection against war. Along this line, we would like the global security working group to foresee the threats, and the attached risks, that endanger peace within our societies as well as between societies and nations, and to promote the use of AI to mitigate these risks. We also foresee the need to think about AI vulnerabilities and potential deception that could hinder its capabilities in defense processes.
Thinking about AI and security symmetrically requires that we envisage how AI may be used as a weapon by adverse forces with similar capacities. AI for attack, including autonomous weapons, intelligent malware and malevolent bots, must be kept on the working group’s radar to avoid surprises.
AI can be leveraged to improve global security in a variety of application domains, including cyber, physical, economic and information security. The working group will examine issues relating to the use of Artificial Intelligence in both offensive and defensive security and will bring together leading researchers, organizations and industry experts from around the world.
Scope
All the different facets of security share a common process of risk assessment, surveillance and response that is at the core of decision-making frameworks in security management, such as Col. John Boyd’s OODA (Observe-Orient-Decide-Act) loop or the NIST Cybersecurity Framework (Identify, Protect, Detect, Respond, Recover). These methods are generic enough to be used in many different domains, provided they are applied by operators with deep domain knowledge. Injecting AI into these frameworks as a booster of human decisions seems the most ethical and least dangerous way of using AI while keeping the human operator at the heart of decision making.
“The shift to AI involves a measure of reliance to an intelligence of considerable analytical potential operating on a fundamentally different experiential paradigm and human operators must be involved to monitor and control AI actions” (The Age of AI, by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher).
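To make the idea of AI as a booster of human decisions concrete, the minimal sketch below shows one hypothetical OODA-style step in which an AI component proposes an assessment and a human operator retains final authority. All names (Observation, ai_orient, human_decide) and the scoring heuristic are illustrative assumptions, not part of any IFIP, Boyd or NIST specification.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    source: str
    data: dict


def ai_orient(observation: Observation) -> dict:
    """Stand-in for an AI model that scores a threat from raw observations."""
    anomalies = observation.data.get("anomalies", [])
    score = min(1.0, 0.2 * len(anomalies))
    return {"threat_score": score, "rationale": f"{len(anomalies)} anomalies observed"}


def human_decide(assessment: dict) -> str:
    """The human operator reviews the AI assessment and keeps final authority."""
    if assessment["threat_score"] > 0.6:
        return "respond"          # operator confirms escalation
    return "keep monitoring"      # operator defers or overrides


def ooda_step(observation: Observation) -> str:
    # Observe -> Orient (AI-assisted) -> Decide (human) -> Act
    assessment = ai_orient(observation)   # Orient: AI boosts the analysis
    return human_decide(assessment)       # Decide/Act: human stays in the loop


if __name__ == "__main__":
    obs = Observation(source="SIEM",
                      data={"anomalies": ["login-burst", "geo-jump", "privilege-escalation"]})
    print(ooda_step(obs))  # 3 anomalies -> score 0.6 -> "keep monitoring"
```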
As an eclectic group of research labs, individual domain experts and AI- or security-related enterprises, WG 12.13’s main objective is to define how AI can help security from a global and generic standpoint. The focus is not limited to AI inferencing technologies but extends to knowledge management, modeling and simulation, and human-machine interaction, using new interface modalities such as VR and AR that, combined with natural language processing, are key enablers of AI integration into human security operations. Similarly, since security is not a state but a process, integrating AI into it requires thinking in terms of continuous improvement and continuous augmentation of security operations. This means integrating AI into every phase of security, from the earliest phases of intelligence gathering through to threat detection and response.
AI technologies considered
- Learning and Intelligence
- Knowledge management, modeling and simulations
- Deep learning and machine learning
- Natural language processing
- Data mining
- Artificial vision, video analytics
- Multi-agents and autonomous systems
- Ontologies
These technologies map onto the following Global Security Technology Matrix, serving the needs of distinct security processes in each domain.
| Domain / Process | Risk assessment (P) | Security implementation, Data acquisition (D) | Monitoring, Surveillance, Anticipation (C) | Forensic: Response, Containment, Investigation (A) |
| --- | --- | --- | --- | --- |
| Data Security | Privacy Impact Assessment | Contract clauses | Data Leak Prevention | Privacy Impact evaluation |
| Economic Security | Market Research, Mapping, Segmentation | OSINT*, SOCMINT* | Competitive Intelligence, Automated Moderation | Crisis Management |
| Systems Security | ISO 27005 | ISO 27002, EDR* | SOC*, SIEM*, UEBA*, SOAR*, CTI* | NDR*, Analysis |
| Physical Security | Site Audit | Access Control, Biometry, Video surveillance, Drones, Intrusion detection | C4I*, Control rooms, Operation centers | Autonomous systems, First responders, Forensic |

The P, D, C and A marks on the four process columns correspond to the Plan, Do, Check, Act cycle.
Members

| Status | Name | Country | Role | Organization | Profile |
| --- | --- | --- | --- | --- | --- |
| confirmed | Dominique VERDEJO | France | Chair | Personal Interactor | |
| confirmed | Frédérique SEGOND | France | Co-chair | INRIA | |
| confirmed | Arumugam CHAMUNDESWARI | India | Secretary | SSN College of Engineering | |
| confirmed | Patrick PERROT | France | Member | Gendarmerie Nationale | |
| confirmed | Brett van NIEKERK | South Africa | Member | University of Kwazulu-Natal | |
| confirmed | Namrata PATEL | France | Member | ONAOS | |
| confirmed | Gustavo GONZALEZ GRANADILLO | Spain | Member | SCHNEIDER ELECTRIC | |
| confirmed | Faisal ZAMAN | Ireland | Member | ACCENTURE | |
| confirmed | Frederick BENABEN | France | Member | IMT | |
| confirmed | Thinagaran PERUMAL | Malaysia | Member | University of Putra | |
| confirmed | Sanjeet KUMAR NAYAK | India | Member | IIITDM Kancheepuram | |
| confirmed | Michael COOLE | Australia | Member | Edith Cowan University | https://www.ecu.edu.au/schools/science/staff/profiles/senior-lecturers/dr-michael-coole |
| confirmed | Ray LANE | Ireland | Member | AQUILABIOSCIENCE | |
WG 12.12 – AI Governance
- Details
- Category: Working groups
- Published: 30 August 2013
Officers
Chair
Anthony Wong, IFIP Vice President, AGW Lawyers & Consultants, Australia
Co-Chair
Amal El Fallah Segrouchni, Sorbonne University, Paris, and COMEST, France
Secretary
TBA
Aims
The main purpose of AIGOV is to connect with selected groups working on AI governance, fostering international collaboration and bringing fresh ideas and opinions from a multidisciplinary, multilateral and multicultural group of stakeholders, including AI experts and students. It will also elaborate reasonable mechanisms for AI governance and for the mitigation of AI risks.
Background and context to WG12.12 (AIGOV), with special accent on AI for Humanity
AI Governance has become a pressing issue for Humanity and recent global developments advocate for new frameworks, structures and processes for better governance and for responsible design, development, deployment and use of AI. These include:
- AI Ethical Frameworks
- AI Regulation
- AI Standards
- AI By Design and Impact Assessment Frameworks
- AI Auditing, Certification and Compliance, to name a few.
The ethics of AI has also been the subject of many debates worldwide. Prof. Stephen Hawking and Elon Musk endorsed 23 ethical principles for AI, and in 2019 jurisdictions including Australia1 and the EU2 published their frameworks, adding to a list of contributions that includes the OECD Principles on Artificial Intelligence3, the World Economic Forum’s AI Governance: A Holistic Approach to Implement Ethics into AI4 and the Singapore Model AI Governance Framework5, to name a few. The WEF has also released a five-step guide to scaling responsible AI.6
The UN Secretary-General, in his June 2020 report, commented that “there are currently over 160 organizational, national and international sets of artificial intelligence ethics and governance principles worldwide”7 and called for a common platform to bring these separate initiatives together.
UNESCO was given the mandate by its Member States to develop an international standard-setting instrument on the ethics of artificial intelligence, which is to be submitted to the UNESCO General Conference in the latter part of 2021. UNESCO has recently released the first draft of this instrument (the Recommendation),8 following on from a preliminary study on the ethics of artificial intelligence by the Extended Working Group on the Ethics of Artificial Intelligence of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST).9
In its report, COMEST reflected that “AI is a distributed technology, whose current practical governance is spread across numerous institutions, organizations and companies, the reflection on its good governance requires a pluralistic, multidisciplinary, multicultural and multistakeholder approach, opening up questions about what type of future we want for humanity. This reflection needs to address the main challenges in the development of AI technologies related to the biases embedded in algorithms, including gender biases, the protection of people’s privacy and personal data, the risks of creating new forms of exclusion and inequalities, the issues of just distribution of benefits and risks, accountability, responsibility, impacts on employment and the future of work, human dignity and rights, security and risks of dual use”.10
It is the general view that the time has arrived to move from principles to operationalizing ethical practice in AI.11 As stated by Fjeld et al., the impact of a set of principles is “likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines”.12 This view resonates with that of UNESCO, which has advocated for Member States to put in place policy actions and oversight mechanisms to operationalize the values and principles in the UNESCO Recommendation.
One of the objectives of the UNESCO Recommendation is to provide a universal framework of values, principles and actions to guide Nation States in the formulation of their legislation, policies or other instruments regarding AI.13
In October 2020, the European Parliament adopted resolutions to regulate AI, setting the pace as a global leader in AI regulation.14 The three resolutions cover the ethical and legal obligations surrounding AI; civil liability, setting fines of up to 2 million euros for damage caused by AI; and intellectual property rights.15 In response, the European Commission published draft legislation obliging high-risk AI systems to meet mandatory requirements related to their trustworthiness.
In April 2021, in a landmark move, the European Commission proposed the first AI legal framework, which could set new benchmarks and global norms for the regulation of AI. The global implications could be similar to those of the EU’s General Data Protection Regulation (GDPR). The proposal followed intense debates on the ethics of AI over the previous few years and adopts a risk-based approach, differentiating between three categories of risk: uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) a low or minimal risk.
The legislative proposal contains a list of prohibited practices where uses of AI are considered unacceptable. These include practices with significant potential to manipulate persons or exploit the vulnerabilities of specific groups, AI-based social scoring, and the use of biometric systems in publicly accessible spaces unless certain limited exceptions apply. Fines of up to €30 million or 6% of worldwide annual turnover have been proposed.
AI systems identified as high-risk are subject to more stringent requirements; they include systems used in critical infrastructures (e.g. transport), scoring to determine access to educational or vocational training, product safety, employment, essential services, law enforcement and the administration of justice.
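As a purely illustrative sketch of the risk-based approach described above, and in no way a legal or compliance tool, the three tiers could be modelled as a simple mapping from use cases to obligations; the tier labels and use-case names below are assumptions chosen only to echo the examples in the text.

```python
# Illustrative only: tier labels and use-case names are assumptions that echo
# the examples in the text above; this is not a legal or compliance tool.
UNACCEPTABLE = {"social_scoring", "manipulative_techniques", "public_space_biometric_id"}
HIGH_RISK = {"critical_infrastructure", "education_scoring", "product_safety",
             "employment", "essential_services", "law_enforcement", "administration_of_justice"}


def risk_tier(use_case: str) -> str:
    """Map a use case to one of the three risk tiers described in the proposal."""
    if use_case in UNACCEPTABLE:
        return "unacceptable risk: prohibited practice"
    if use_case in HIGH_RISK:
        return "high risk: mandatory trustworthiness requirements apply"
    return "low or minimal risk: no additional obligations under the proposal"


print(risk_tier("employment"))      # high risk
print(risk_tier("social_scoring"))  # unacceptable risk
print(risk_tier("chess_engine"))    # low or minimal risk
```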
1 Australian AI Ethics Framework (2019). https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework, last accessed 2020/6/6
2 European Commission: Ethics guidelines for trustworthy AI (2019). https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai, last accessed 2020/6/6
3 OECD, OECD Principles on Artificial Intelligence (22 May 2019), https://www.oecd.org/going-digital/ai/principles/, last accessed 2020/6/20
4 World Economic Forum: AI Governance: A Holistic Approach to Implement Ethics into AI, https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai, last accessed 2020/6/20
5 Singapore Model AI Governance Framework, https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf, last accessed 2020/6/20
6 https://www.weforum.org/agenda/2021/03/scaling-up-with-responsible-ai-a-5-step-guide-for-companies/
7 Report of the UN Secretary-General, Road map for digital cooperation: implementation of the recommendations of the High-level Panel on Digital Cooperation, June 2020, p. 18, www.un.org/en/content/digital-cooperation-roadmap/, accessed January 2021.
8 The first draft of the recommendation submitted to Member States proposes options for action to Member States and other stakeholders and is accompanied by concrete implementation guidelines. The first draft of the AI Ethics Recommendation is available at https://unesdoc.unesco.org/ark:/48223/pf0000373434; https://en.unesco.org/artificial-intelligence/ethics
9 Preliminary study on the technical and legal aspects relating to the desirability of a standard-setting instrument on the ethics of artificial intelligence - UNESCO Digital Library; COMEST - Members of COMEST (unesco.org)
10 Preliminary study on the technical and legal aspects relating to the desirability of a standard-setting instrument on the ethics of artificial intelligence - UNESCO Digital Library, paragraph 3
11 See also the opinion of the High Level Panel Follow-up Roundtable 3C Artificial Intelligence - 1st Session, www.un.org/en/pdfs/HLP%20Followup%20Roundtable%203C%20Artificial%20Intelligence%20-%201st%20Session%20Summary.pdf
12 Fjeld, Jessica and Achten, Nele and Hilligoss, Hannah and Nagy, Adam and Srikumar, Madhulika, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (January 15, 2020). Berkman Klein Center Research Publication No. 2020-1, Available at SSRN: https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482
13 UNESCO first draft of the AI Ethics Recommendation, Resolution 68, available at https://unesdoc.unesco.org/ark:/48223/pf0000373434
14 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)), https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.pdf
15 Three Resolutions on the ethical and legal aspects of Artificial Intelligence software systems (“AI”): Resolution 2020/2012(INL) on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and related Technologies (the “AI Ethical Aspects Resolution”), Resolution 2020/2014(INL) on a Civil Liability Regime for Artificial Intelligence (the “Civil Liability Resolution”), and Resolution 2020/2015(INI) on Intellectual Property Rights for the development of Artificial Intelligence Technologies (the “IPR for AI Resolution”)
First event
Panel (IFIP-supported event) on AI Ethics and Governance at IJCAI 2021, Montréal, Canada (online), 25 August 2021, with the participation of Anthony Wong, Amal El Fallah Segrouchni, David-Raphael Bravo-Marcial and Eunika Mercier-Laurent.
Members
| Name | Affiliation / Role | Country |
| --- | --- | --- |
| Ulrich Furbach | University of Koblenz | Germany |
| Eunika Mercier-Laurent | Chair, TC12 | France |
| David Kreps | Chair, TC 9 | Ireland |
| Christina Zoi Mavroedi | EPITA student | Greece |
| David Raphael Bravo | EPITA student | Venezuela |
| Nominee from the Responsible AI Institute (to be confirmed by Ashley Casovan, Executive Director) | Responsible AI Institute, non-profit organization, www.responsible.ai | Canada |
| John MacIntyre (to be confirmed) | Dean of the Faculty of Applied Sciences and Pro Vice-Chancellor, University of Sunderland; Editor, Springer journal AI and Ethics, https://sure.sunderland.ac.uk/profile/63 | UK |
| Joanna J Bryson (to be confirmed) | Professor of Ethics and Technology, Hertie School of Governance, Berlin (joannajbryson.org) | Germany and UK |
TC12’s Working Groups
- Details
- Category: Working groups
- Published: 23 July 2013
The TC12 working groups operate independently, each with its own aims and objectives. They are:
- WG 12.1 – Knowledge Representation and Reasoning
- WG 12.2 – Machine Learning and Data Mining
- WG 12.3 – Intelligent Agents
- WG 12.4 – Semantic Web (closed)
- WG 12.5 – Artificial Intelligence Applications
- WG 12.6 – Knowledge Management
- WG 12.7 – Social Networking Semantics and Collective Intelligence
- WG 12.8 – Intelligent Bioinformatics and Biomedical Systems (closed)
- WG 12.9 – Computational Intelligence
- WG 12.10 - Artificial Intelligence & Cognitive Science
- WG 12.11 - Artificial Intelligence for Energy and Sustainability
- WG 12.12 - AI Governance
- WG 12.13 - AI for Global Security
- WG on Human-Centred Artificial Intelligence (joint WG with TC 13)
Follow the links on the main menu (Working Groups) for details of each working group.
Subcategories
Knowledge Representation and Reasoning
This group’s website is at: http://www.cis.hut.fi/research/compcogsys/ifip-wg12.1/
Officers
Chair
Dr. Timo Honkela, Helsinki University of Technology
Vice-Chair
TBA
Secretary
TBA
Aim
To study and develop theory and techniques for knowledge representation and reasoning.
Scope
The scope of the Working Group’s activities includes (but is not restricted to) the following:
- Abductive Reasoning
- Inductive Reasoning
- Non-monotonic Reasoning
- Reasoning about Actions and Change
- Spatial Reasoning
- Temporal Reasoning
- Automated Reasoning
- Computational Logic
- Logic Programming
- Situation Calculus
- Production Systems
- Semantic Networks
- Frames
- Object-oriented Representation
- Bayesian Networks