GOAL 09: Industry, Innovation and Infrastructure
Toward a G20 framework for artificial intelligence in the workplace
This report advocates for creating a high-level G20 framework, built on a set of principles, for the introduction and management of big data and AI in the workplace. The paper identifies the main issues and suggests two paths towards adoption.
Top 10 principles for ethical artificial intelligence
This report provides 10 principles for ethical artificial intelligence. From transparency in decision-making to ensuring a just transition and support for fundamental freedoms and rights, the report aims to empower workers and maintain a healthy balance of power in the workplace.
The state of AI in 2022 - and a half decade in review
AI adoption has more than doubled over the past half decade, peaking at 58% in recent years. The report highlights the importance of best practices and of continued investment in AI, which is shown to deliver financial returns. However, most organisations are not mitigating the risks associated with AI despite its growing use.
The implications of AI across sectors and against 6 key ESG considerations
AI offers significant potential benefits as well as risks. This report helps readers understand the risks associated with developing and using AI technology. A scoping exercise identifies opportunities and threats across sectors, and six core ESG considerations, including trust and security, data privacy, and sentience, are evaluated for their potential impact.
The impact of digital technology on human rights in Europe and Central Asia
This report examines the impact of digital technology and artificial intelligence on human rights in Europe and Central Asia, with a particular focus on data protection and legislative frameworks. It provides an overview of the relevant international and regional initiatives, and analyses the applicable legal, regulatory, and institutional frameworks.
The global governance of artificial intelligence: Next steps for empirical and normative research
This analytical essay outlines an agenda for research into the global governance of artificial intelligence (AI). It distinguishes between empirical research, aimed at mapping and explaining global AI governance, and normative research, aimed at developing and applying standards for appropriate global AI governance.
Statement on artificial intelligence, robotics and 'autonomous' systems
This statement from the European Group on Ethics in Science and New Technologies emphasises the need for a shared international ethical and legal framework for the design and governance of artificial intelligence, robotics, and 'autonomous' systems. It also proposes ethical principles based on EU values to guide the framework's development.
Safety by design: Model clauses for due diligence arrangements and funding agreements
This document provides model clauses for due diligence arrangements and funding agreements related to eSafety for startups. It includes pre-conditions for funding agreements, covering policies, staffing, training, and external communication. Additionally, the document urges startups to complete the eSafety assessment tool and implement safety by design measures.
Report of COMEST on robotics ethics
COMEST has released a report on robotics ethics which covers the history, development, and social impact of robots. It also offers recommendations for the ethical use of robotics.
Investors' expectations on responsible artificial intelligence and data governance
This report outlines responsible AI and data governance principles and an engagement framework for investors across multiple sectors. The six core principles aim to enhance the auditability, explainability, and transparency of machine learning, while taking into account legal, regulatory, ethical, and reputational risks.
Generative artificial intelligence in finance: Risk considerations
Generative AI is a subset of AI/ML that creates new content. It offers enhancements to efficiency and customer experience, as well as advantages to risk management and compliance reporting. However, the deployment of GenAI in the financial sector requires the industry to recognise and mitigate the technology's risks comprehensively; financial institutions must strengthen their cybersecurity and regulatory oversight capacities.
Ethics guidelines for trustworthy AI
The European Commission's AI High-Level Expert Group has released their Ethics Guidelines for Trustworthy AI. The report provides a framework for creating lawful, ethical, and robust AI systems throughout the system's life cycle. The guidelines focus on respect for human autonomy, prevention of harm, fairness, and explicability.
Ethically aligned design: A vision for prioritising human well-being with autonomous and intelligent systems
This report is a call to action for technologists to align the creation of autonomous and intelligent systems with defined values and ethical principles that prioritise human well-being. Emphasising the importance of embedding values and morals into these systems, it discusses a range of topics, including job automation, personal data protection, A/IS education, and law.
Engaging the ICT sector on human rights: Privacy and data protection
This report provides a sector-wide risk assessment of privacy and data protection in the Information and Communications Technology (ICT) industry. It covers international standards and salient issues to consider when engaging with ICT companies, the "business case" for privacy and data protection, and investor guidance for engaging ICT companies.
Engaging the ICT sector on human rights: Conflict and security
This report provides an overview of the main human rights instruments and adverse impacts of the ICT sector in conflict-affected areas, emphasising its role in promoting security and other human rights while highlighting the potential risks of new technologies in this context. It also includes investor guidance to help evaluate if companies are meeting their human rights responsibilities.
Beyond explainability: A practical guide to managing risk in machine learning models
This report offers a comprehensive guide for effectively managing risk in machine learning models. It presents a framework that enables data science and compliance teams to create better, more accurate, and more compliant models. The report stresses the importance of understanding the data used by models and implementing three lines of defence to assess and ensure their safety.