About the event

The world is becoming more urbanized. Over 56% of the world's population now lives in cities, a figure that reaches as high as 97% in Belgium. In parallel, technology plays an increasingly important role in these environments. Artificial intelligence, data, and robotics have become an integral part of living in cities. We may not always be aware of their presence, but they play a critical role in our daily commutes, in government services, in local stores, and in public spaces.

Concepts such as "smart cities" have emerged, reflecting cities' continuing desire to implement these technologies at scale to address local challenges - one of which is, of course, becoming even more sustainable. AI and data are now actively used in a variety of areas, from energy sharing to biodiversity monitoring. Despite these hopes, there are growing concerns about the negative impact these technologies may have on sustainability. Ethical and legal concerns have also emerged, to the point where some cities are enacting local responsible-use principles and insisting on greater citizen inclusion.

This conference will investigate whether and how AI, data, and robotics can become local and truly sustainable for cities.

READ THE 2022 SUMMARY & RECOMMENDATIONS


Conference Theme

Eight international sessions on the theme of “Local and Sustainable AI, Data, and Robotics”

Localism for AI, Data, and Robotics

Developing a so-called “smart” city with the use of AI, data, and robotics entails a series of actions and decisions taken within the context of that city and its community. This has been called “AI localism”. The questions that follow are: how can AI help address problems in very local contexts or projects? How are citizens involved in the design of socially relevant algorithms or robots? How can decision-makers calibrate AI, data, and robotics policies for local contexts and needs?

How are cities around the world creating strategies for AI, data, and robotics based on their unique needs? How do we make sure AI will solve a city-specific challenge while keeping pace with AI policies and trends in Europe and internationally?

Sustainability: Preserving environmental, social, and economic vitality

While AI is being used to address climate-change-related challenges and to optimize the use of critical resources, it could itself have a large impact on the environment. Beyond the environment, the use of these technologies to optimize public services has raised many hopes, while also being associated with scandals.

In parallel, reports claim that AI could help increase global GDP by up to 14% by 2030, contributing to economic and social growth. But are robots and AI systems really able to address and solve so many societal challenges all at once?

How do we balance the use of these technologies as a source of hope while addressing their disruptive effect on global sustainability? How do we ensure that these technological tools, algorithms, and data spaces do their job without harming society?

Citizen Engagement: Involving citizens for better digital transformation

How do we make sure the future of AI, data, and robotics will serve the common good? How should these technologies be implemented to address pressing challenges such as the climate crisis, humanitarian conflicts, and health crises, while respecting fundamental rights and ethical safeguards? And what role can citizen engagement play in all of this?


Conference Schedule

The FARI Brussels Conference agenda is continuously being updated and improved to give you the best and most insightful conference experience. We will release the final program and presentation titles in the weeks before September 11, but here's a sneak peek of what is cooking!

  • Registration & Coffee (9:00-10:00)
  • Opening Ceremony (10:00 - 10:30)
  • Local & Sustainable AI (10:30-12:00)
  • Lunch Break (12:00-13:00)
  • Citizens & AI (13:00-14:30)
  • Mobility & AI (14:35-16:00)
  • Coffee Break (16:00-16:30)
  • Public Services & AI (16:30-18:00)
  • Cocktail (18:10-21:00)
09:00 - 10:00 Registration, welcome coffee & networking
10:00 - 10:05 Opening Ceremony. By Carl Mörch and Hans de Canck, Co-directors, FARI
10:05 - 10:10 Welcome speech. By Thomas Dermine, State Secretary for Science Policy, Recovery Program and Strategic Investments, Belgian Federal Government
10:10 - 10:15 Welcome speech. By Bernard Clerfayt, Minister of the Government of the Brussels-Capital Region, responsible for Employment and Vocational Training, Digital Transition, Local Authorities and Animal Welfare
10:15 - 10:20 Welcome speech. By Barbara Trachte, Secretary of State of the Brussels-Capital Region, responsible for Economic Transition and Scientific Research
10:20 - 10:25 Welcome speech. By Annemie Schaus, Rector, Université Libre de Bruxelles
10:25 - 10:30 Sustainability and AI: Setting the scene. By Aimee van Wynsberghe, Director, Institute for Science and Ethics, University of Bonn
10:30 - 12:00 Panel discussion: Local and Sustainable AI, when technologies meet communities. By Ann Nowé, Head, AI Lab, Vrije Universiteit Brussel

Linking up international perspectives to Brussels reality

Urban AI Governance: What does AI Localism look like in action? By Stefaan Verhulst, Co-Founder, The GovLab, New York University

Dr. Stefaan G. Verhulst is Co-Founder, Chief of R&D, and Director of the Data Program of the Governance Laboratory (The GovLab) and Co-Founder and Principal Scientific Advisor of the Data Tank – a do and think tank on data. Verhulst’s latest scholarship centers on how responsible data and technology can improve people’s lives and the creation of more effective and collaborative forms of governance. Specifically, he is interested in the perils and promises of collaborative technologies and how to harness the unprecedented volume of data and information to advance the public good.

Indigenous Protocols for Artificial Intelligence. By Angie Abdilla, Founder and Director of Old Ways, New

In this talk I will share our journey, starting with an international group of Indigenous technologists at the inaugural workshop series in Hawaii in 2019 and leading to the IP//AI Incubator in March 2021. Key learnings from the foundations of these works were the need for Indigenous AI to be regional in nature, conception, design, and development; to be tethered to localised Indigenous laws inherent to Country; to be guided by local protocols to create the diverse standards and protocols required for the developmental processes of AI; and to be designed with our future cultural interrelationships and interactions with AIs in mind. Through Country Centered Design we established some broad principles and protocols and then moved towards a test case, running preliminary trials applying an Aboriginal kinship system as a selection framework in genetic computing. Our findings throughout this process were encouraging, indicating that there is potential for Indigenous Knowledge to guide the design and engineering principles and practices of AI, bridging the current ontological and epistemological divides between machines, humans, and the environment.

Professor Angie Abdilla is a palawa woman. She is the founder and Director of Old Ways, New and works with Indigenous knowledges and systems in the technology sector. She is a Professor at the Australian National University's School of Cybernetics and co-founded the Indigenous Protocols and Artificial Intelligence working group (IP//AI). Angie is a member of the CSIRO Data61 Advisory Group and the National AI Centre's Think Tank, and was a member of the Global Futures Council on Artificial Intelligence for Humanity as part of the World Economic Forum in 2022.

Responsible AI for local urban contexts: AI & Cities report approach and lessons learned. By Benjamin Prud'Homme, Executive Director, AI for Humanity, Mila (Quebec), & Shazade Jameson, Senior Consultant Tech Governance, Mila

In October 2022, UN Habitat and Mila – Quebec AI Institute jointly released the “AI and Cities: Risks, Applications and Governance” report. This collaborative effort aimed to provide insights on the opportunities and challenges of using AI to support the development of people-centered and sustainable cities. We will present the approach and structure of the report, including the need for inclusive governance and contextualised implementation. Building from lessons learned in the context of this work, we will then explore different approaches to developing AI that is sensitive to local contexts and embedded in a value-driven governance framework.

Repairing AI: Taking urban AI ethics out of the cloud. By Taylor Stone, Senior Researcher, Institute for Science and Ethics, University of Bonn

This talk will explore what it means to repair how we think about urban AI: as a material infrastructure that is shaping, and shaped by, its own operating environment. Digital urban innovations are proliferating rapidly, often in the name of sustainable goals. This includes improvements to the efficiency of existing systems, but also new interventions such as the monitoring and management of urban ecosystems. While this offers unique opportunities for the future of urban sustainability, such innovations should not be taken lightly. As AI-enabled innovations become central to restoration efforts and management practices, they will increasingly mediate our relation to urban environments. Further, this digital infrastructure comes with a very real physical footprint, including energy use, resource extraction, and physical hardware. From this perspective, we can take urban AI ethics out of the cloud and into the world of material things: as a tool to repair our material world, as an infrastructure with an ecological footprint, and as formative to how 'built' and 'natural' environments interact and co-evolve. This reveals both challenges and opportunities for realising localised and sustainable AI in practice.

Taylor Stone is a Senior Researcher at the Institute for Science and Ethics, University of Bonn, and a member of the Bonn Sustainable AI Lab. His research sits at the crossroads of philosophy and design, focused on envisioning and enacting sustainable urban futures. He works on the ethics of emerging technologies and their impacts on cities, and specialises in urban lighting. His writing has appeared in several academic journals; he edited Technology and the City: Towards a Philosophy of Urban Technologies, and is an associate editor of the new Philosophy of the City Journal.

Optimize the existing system with AI, or explore alternative futures through foresight co-design methods? By Christophe Abrassart, Professor of Sustainable Design, Foresight & Innovation

In my talk, I will present a decisive distinction between two approaches to thinking about local and sustainable AI. The first follows the dominant reasoning of many AI experts and consists of mobilizing AI and robotics to optimize the existing system (e.g. the optimization of current transport routes in the city, with the risk of maintaining the status quo on car use and reinforcing path dependencies). The second approach is based on scenarios for 2030 or 2050 featuring various digital devices (high-tech or low-tech) to foster paradigm shifts in the functioning of cities (e.g. the generalized shift to active mobility in the quarter-hour city, or the safe transition towards electric mini-mobility). This second approach, which I recommend for imagining cities that can operate within planetary boundaries and ensure environmental justice, involves design fiction and “foresight co-design” methods that are very interdisciplinary and open to citizens.

Christophe Abrassart is a professor at the School of Design of the Faculty of Planning at the University of Montreal, where he co-leads the Lab Ville Prospective and regularly organizes foresight co-design workshops with stakeholders and citizens. He teaches methods of ecodesign, participatory design, and strategic foresight applied to the ecological transition. In 2018, he was responsible for the co-construction process of the Montreal Declaration on Responsible AI. He is also a researcher at OBVIA in Québec, where he works on the theme of “digital sobriety”.

Local Respondent. By Marc Van den Bossche, Co-Director, paradigm.brussels
12:00 - 13:00 Lunch Break & Visit of Partner stands (HORTA HALL)
13:15 - 14:30 Panel Discussion: Methodologies to involve citizens around AI, Data, Robotics. By An Jacobs, Professor, Vrije Universiteit Brussel
Involving citizens for wider adoption of robotics in Europe. By Mette Simonsen, Project Manager, Danish Board of Technology / Citizen Engagement Coordinator, Robotics4EU Project

Robot technologies are undergoing rapid development, and their implementation can cause major change and disruption to society. These changes are already impacting much of the world around us. Areas such as production, transportation, agriculture, healthcare, and others are becoming increasingly reliant on automation and robotic technology. It is therefore essential to explore how these societal changes are perceived and received by “regular citizens”, namely individuals who are not directly involved in the development of robots, or consulted on their design. This presentation will emphasize the importance of engaging society in the discussion of robotics and highlight the efforts of the Robotics4EU project in this regard. We will showcase the outcomes of the “GlobalSay on Robotics”, a comprehensive citizen consultation conducted in October and November 2021 across 12 countries. The consultation aimed to gain insights into citizens' perspectives on non-technological aspects of robotics, including potential benefits, risks, and barriers to widespread adoption in society. Additionally, we will present the results of a European-wide citizen consultation focused on validating robotics business ideas. This consultation involved 11 different companies, each presenting their robotics business ideas and receiving valuable feedback from citizens across Europe. By sharing these findings, we aim to foster a better understanding of the societal impact of robotics and the importance of involving citizens in shaping its future.

Mette Marie Simonsen is a Project Manager at the Danish Board of Technology and serves as the Work Package leader for the citizen engagement activities in the Robotics4EU project. With a Master's degree in Sustainable Design Engineering from Aalborg University, she specializes in applying citizen and stakeholder engagement methods to research and innovation projects on novel technologies.

Should we include AI agents in deliberation about AI? By Marc-Antoine Dilhac, Director of the Algora Lab

If we are to include in a political deliberation the people whose interests are in question, shouldn’t we include the AI systems whose deployment is being discussed? This proposal sounds counter-intuitive, since it seems unreasonable to assign an interest or a motivation to artificial agents. But there might be an alternative rationale for their inclusion, one that is epistemic and not immediately political (democratic equality, for example).

Marc-Antoine Dilhac is director of the Algora Lab, a research lab studying the ethical and societal impact of AI and digital innovation. He is professor of philosophy at the Université de Montréal and holds the CIFAR Chair in AI ethics. In 2017, he instigated the Montréal Declaration for a Responsible Development of AI and, in 2020, led a global consultation on AI ethics for UNESCO. He sits on the Government of Canada's AI Advisory Council and co-chairs its Public Awareness Working Group. He holds a PhD in political philosophy from the University of Paris 1 Panthéon-Sorbonne, and specializes in theories of democracy and social justice, and in applied ethics with a focus on AI ethics.

Empowering Youth Participation: AI's Influence on Democracy and CollectiveUP's Comprehensive Solution. By Liliana Carrillo, Founder and CEO of CollectiveUP

In an age of rapid technological advancement, Artificial Intelligence (AI) is increasingly reshaping the fabric of our democratic societies. This brief presentation delves into the profound impact of AI on the participation of young citizens in the democratic process. As AI algorithms influence information dissemination, political campaigns, and decision-making, it is crucial to scrutinize how these advancements both empower and challenge the engagement of our younger generation.

Amidst this transformative period, CollectiveUP presents a multifaceted solution:

  • By facilitating participatory workshops and transforming neglected urban spaces into 3D representations within the Minecraft gaming environment, we provide youths with a platform to actively shape their city's spaces and voice their preferences.
  • Additionally, our AI-driven learning materials address critical topics such as bias mitigation, combating misinformation, data privacy, understanding generative AI, and advancing the Sustainable Development Goals (SDGs). These resources equip youth with the necessary tools to navigate the evolving digital democracy effectively.

Join us as we delve into the ramifications of AI on youth participation in democracy, highlighting the opportunities it affords for informed civic engagement. We’ll also underscore CollectiveUP’s pivotal role in nurturing a more inclusive, informed, and engaged generation of citizens.

Caring for robots? Making robots elements of healthcare. By Lasse Blond, Specialist at Teknologisk Institut

Robots have long been considered human substitutes for the performance of dirty, dangerous, and dull tasks. Increasingly, however, robots are envisioned as technological solutions to the demographic and age-related challenges facing many developed countries. Social robots have already entered elderly care facilities in various European countries, including Denmark.

This talk explores how social robots are received and understood as healthcare technologies, and more specifically how a South Korean socially assistive robot was adapted to real-life practices by Danish healthcare practitioners. Based on insights from ethnographic fieldwork conducted between 2015 and 2019, the speaker argues against perceiving robots as clearly defined stand-alone objects, maintaining instead that robots are material elements of social practices reliant on human interaction, maintenance, and care.

The talk will discuss the importance of engaging healthcare practitioners in the design of sociotechnical practices that include robots. User participation in robot system design becomes even more urgent as robots and AI will inevitably merge into the advanced robotic systems of our societies.

Lasse Blond works as a specialist at the Danish Technological Institute. He is a former postdoc at the University of California, Berkeley and at Aarhus University, Denmark. He holds a PhD from Aarhus University. His research interests include human-robot interactions, ethnographic explorations of the adaptation of robots in practice, and how concepts of artificial intelligence and intelligence augmentation influence design thinking among robotic system engineers. He was awarded the Internationalisation Fellowship by the Danish Carlsberg Foundation in 2018.

AI in digital citizen participation: challenges and potential solutions. By Nicolas Bono Rossello, Postdoctoral researcher in Digital and Information Management at the University of Namur

The use of digital technologies has increased the number of citizens participating in policy-making activities, fostering digital mass participation at different policy levels. The larger scale of these activities and the use of digital channels have brought new challenges, such as information overload and a lack of actual discussion and connection between participants, and between participants and practitioners (policy makers and public sector professionals). The use of AI in digital participation platforms can help address these challenges. This talk will discuss the potential of AI to help digital participation reach its goals of increasing both the quality and quantity of public policy and public service outcomes, and of improving the interaction between government and citizens.

Nicolas Bono Rossello received his M.Eng. degree in Industrial Engineering from the Universidad Politecnica de Valencia (Spain) in 2017 and his Ph.D. in Engineering Sciences and Technology from the Université Libre de Bruxelles (Belgium) in 2022. He is currently working as a postdoctoral researcher in Digital and Information management at the University of Namur.

During his PhD, he worked on the use of new technologies to improve agricultural management and to support the control of epidemics. His current research focuses on the development and support of citizen participation through digital technologies. His research interests include digital citizen participation, artificial intelligence and network-based systems.

Local Respondent. By Tanguy De Lestre, advisor on impact-related projects in the digital, sustainable, and societal transition (public/private sector): process facilitation, expert advice, mentoring
14:35 - 16:00 Panel discussion: AI, Robotics, and Mobility as a Service. By Gianluca Bontempi, Co-head, Machine Learning Group
Smart Mobility for Sustainable Development Goals: Enablers and Barriers. By Alaa Khamis, AI & Smart Mobility Technical Leader at General Motors Canada

Smart mobility is a wide umbrella covering different systems and services that meet various end-user needs such as safety, seamless accessibility, reliability, and affordability without compromising the collective good of society and the environment, in terms of reduced congestion, reduced emissions, and sustainability. Emerging smart mobility systems and services can play an instrumental role in achieving several of the Sustainable Development Goals (SDGs) adopted by UN Member States. This talk will shed light on the role of smart mobility as an enabler of sustainable development in developed and developing countries. The talk will also discuss a few barriers that need to be properly handled to realize the full potential of smart mobility technology as a key enabler of the SDGs.

Dr. Alaa Khamis works as AI & Smart Mobility Technical Leader at General Motors Canada. He is the author of the books “Smart Mobility: Exploring Foundational Technologies and Wider Impacts” and “Optimization Algorithms: AI techniques for design, planning, and control problems”. He is also a Lecturer at the University of Toronto, an Adjunct Professor at Ontario Tech University, and an Affiliate Member of the Center of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo. He previously worked as Autonomous Vehicles Professor at Zewail City of Science and Technology, Head of AI at Sypron Solutions, Associate Professor and Head of the Engineering Science Department at Suez University, Associate Professor and Director of the Robotics and Autonomous Systems (RAS) research group at the German University in Cairo (GUC), Research Assistant Professor at the University of Waterloo, Canada, Visiting Professor at Universidad Carlos III de Madrid, Spain, and at the Université de Sherbrooke, Canada, Visiting Researcher at the University of Reading, UK, and Distinguished Scholar at the University of Applied Sciences Ravensburg-Weingarten, Germany. His research interests include smart mobility, autonomous and connected vehicles, cognitive IoT, algorithmic robotics, humanitarian robotics, intelligent data processing and analysis, machine learning, and combinatorial optimization. He has published 5 books and more than 180 scientific papers in refereed journals and international conferences, and has filed 47 US patents. For more information, please visit: http://www.alaakhamis.org/

Mobility is becoming increasingly digital: But is everyone on board? By Imre Keserű, Mobility and Logistics Research Group at Vrije Universiteit Brussel

Our mobility systems are becoming increasingly digitalised through app-based shared mobility services, smart delivery systems, online ticketing, and AI-driven route planning, among others. These services promise a better user experience and cost savings for operators. At the same time, however, a growing digital gap leaves people with low digital skills or without state-of-the-art devices out of the benefits. The challenge is how we can ensure that the promised benefits of digitalisation reach the sections of society less familiar with digital services, e.g. older people, people with lower incomes, people with reduced mobility, persons lacking digital skills, or ethnic minorities.

Dr Imre Keseru is professor of urban mobility and deputy director at the Mobilise Mobility and Logistics Research Group at the Vrije Universiteit Brussel (VUB). His research focuses on developing tools and methods to support participatory planning, the evaluation of urban mobility interventions, transport foresight and inclusive and equitable transport. He has been leading several projects on urban mobility including the EU-funded project “Inclusive Digital Mobility Solutions” (INDIMO).

AI for Public Transports. By Valérie Zapico, CEO and founder of Valkuren

In the ever-evolving landscape of urban transportation, Artificial Intelligence (AI) stands as a formidable catalyst for transformative change. Our presentation on ‘Artificial Intelligence for Public Transports’ delves into the profound impact of AI on the way we navigate cities. AI’s prowess in optimizing public transport routes is reshaping efficiency and sustainability, leveraging real-time data to minimize travel times, operational costs, and environmental footprints. Moreover, AI’s predictive maintenance capabilities are ensuring the health of transportation vehicles and infrastructure, reducing breakdowns, downtime, and maintenance expenses. Safety also takes center stage as AI systems analyze video surveillance and sensor data, swiftly detecting anomalies and responding to accidents or suspicious activities. Passengers are enjoying newfound convenience through AI-driven ticketing systems and real-time information provided by chatbots and virtual assistants, making their journeys smoother and more enjoyable. While autonomous vehicles are on the horizon, addressing regulatory and public acceptance challenges, AI is making strides in enhancing environmental sustainability by optimizing energy consumption and lowering emissions. This presentation will not only explore the vast potential of AI in public transport but also confront the ethical, cybersecurity, and accessibility considerations that come with it, promising a future where urban mobility is smarter, safer, and more efficient than ever.

Valerie Zapico is CEO/Founder of Valkuren, a company specializing in the development of Big Data solutions including Data Analytics & Artificial Intelligence services, in 2 main areas: manufacturing and mobility. Valkuren’s mission is to help companies optimize their processes and improve their decision-making thanks to new ways of leveraging data, while paying close attention to human links and diversity. This is why, in parallel, Valerie is responsible for the Brussels chapter of the Women in Big Data organization and is involved in various activities linked to the AI4Belgium community.  Named Inspiring Fifty for Belgium in 2020 and in the Top 3 of ICT Women of the year 2023, Valerie is also a speaker, notably at UMons in the “Hands-on AI” certificate program.

MaaS and AI: How should we govern for sustainable mobility? By Eriketti Servou, PostDoc fellow, EuroTechPostDoc

This presentation focuses on the role of AI in the governance of Mobility-as-a-Service (MaaS) with regard to sustainability. It contributes a novel approach to understanding the role of AI in governance processes through MaaS, and its implications for social and environmental sustainability, combining fieldwork with a narrative literature review.

I use MaaS as a case study for mobility governance, as it is regarded as a key innovation for sustainable mobility, with data and AI playing a central role. This presentation demonstrates how crucial it is that sustainability goals play a central role in the governance of MaaS, and how there is a real danger that sustainability will not be part of mobility systems.

Using MaaS as a point of departure, the presentation abstracts broader issues of AI in governance with regard to sustainable mobility futures. It demonstrates how the performativity and real-time dimension that algorithms add to governance require co-creation and collaboration between developers, policymakers, citizens, and algorithms to respond efficiently to sustainability challenges. At the same time, it is crucial that the objectives and values of algorithms are decided collectively by human private and public actors to ensure alignment of interests and public value.

Regarding the potential implications of governance with AI for public value and sustainability goals, the presentation shows that a balanced hybrid governance approach between policymakers, industry players, and algorithms (as non-human actors), with explicit and dynamic feedback mechanisms, is required so that public and sustainability goals are ensured. This entails preventing governance from degenerating into algocracy (i.e., power lying with algorithms rather than with people) through transparency and democratic deliberation of sustainability objectives. To ensure public value, these objectives need to be tailored to local sustainability priorities and need to manage algorithmic biases that might result in social injustices related to algorithmic values and data representativeness, as well as environmental concerns related to environmentally harmful data usage.

Overall, this presentation proposes a conceptualisation of the term hybrid governance that highlights the role of AI as an actor, and captures how the interplay between human actors and AI algorithms in MaaS governance creates new accountabilities, opportunities, and risks for sustainability.

Dr. Eriketti Servou is a postdoctoral fellow of the EuroTechPostdoc programme (Marie Skłodowska-Curie Actions, MSCA), based in the Technology, Innovation and Society group at Eindhoven University of Technology (TU/e). Eriketti is a transdisciplinary researcher working at the intersection of governance, socio-technical innovation, smart cities, digitalization, and sustainable mobility. She combines academic and practical experience (as an urban planner and smart-cities consultant) from five countries: Greece, Denmark, Germany, Scotland, and the Netherlands.

16:00 - 16:30 Coffee Break & Visit of partners’ booths (HORTA HALL)
16:30 - 18:00 Panel discussion: AI, Data and Robotics in Public Services towards a new “algocracy” By Hugues Bersini, Head, IRIDIA, Université libre de Bruxelles
Using AI to strengthen VDAB in becoming a trusted advisor By Karolien Scheerlinck, Manager, AI Center of Excellence

Since 2020, VDAB has aimed to support every citizen in their career, rather than focusing solely on job seekers. This means that the number of clients has grown from approximately 200,000 job seekers to about 4,000,000 individuals, the active population of Flanders. Innovative solutions that convert VDAB’s data capital into valuable insights play an important role in achieving these objectives. Using this type of data is, however, not without risks: the data may be biased or unrepresentative of reality, so AI models can learn erroneous patterns and make inaccurate predictions. Many AI applications within the HR domain are considered high-risk AI under the AI Act. It is therefore crucial to thoroughly identify and, where possible, mitigate the associated risks. To oversee this process, VDAB established an Ethics Council last year, an independent body that advises on the responsible and ethical use of data and AI. During this presentation we will introduce the AI Center of Excellence at VDAB, discuss how VDAB uses data and AI to support and improve its core tasks, and explain how this type of innovation is governed at VDAB.

Ms. Karolien Scheerlinck graduated from Ghent University in 2007 with a master’s degree in Bioscience Engineering. She then started a PhD at the department of Mathematical Modelling, Statistics and Bioinformatics, with a focus on heuristic optimisation algorithms. Her first working experience was as a research engineer at the Belgian railway company Infrabel, where she focused on optimising the robustness of railway timetables. In 2014, she joined Be-Mobile as a research engineer in the innovation lab, where she was in charge of developing algorithms to extract important insights from traffic data. In 2017, she was recruited by VDAB as a data scientist. She worked in VDAB’s innovation lab and was, among other things, responsible for developing VDAB’s first artificial intelligence model, “Distance to the labour market”. This model is currently used to prioritise jobseekers according to their probability of finding a job. Between 2017 and 2019, the innovation lab expanded quickly and became a more mature team, with Karolien playing an important role as Data Scientist Lead. In 2020, she became the manager of the team and transformed it into the AI Center of Excellence of VDAB. The team has a track record of award-winning tools, such as the AI vacancy-matching engine Jobnet and the orientation tool Jobbereik.

Literacy empowers participation — How can AI empower literacy? By Cerstin Mahlow, Professor of Digital Linguistics and Writing Research at the School of Applied Linguistics at the Zurich University of Applied Sciences (ZHAW)

Traditionally, literacy (reading and writing) empowers people to participate in society at large; inclusion and diversity both depend on literacy. With the advent of the Web and the Internet, reading and writing moved into the digital space. Digital literacy can be understood both as the ability and skill to read and write in today’s digital world in order to participate in society, and as the ability and skill to navigate the digital world, including information literacy, data literacy, etc., in what might be called general digital literacy. With the availability of artificial intelligence-based applications, a new literacy seems to be emerging: AI literacy, as another part of general digital literacy. In this presentation I will argue that the power of AI applications can be leveraged to support digital reading and writing: AI empowers digital literacy to help people participate in an inclusive and diverse society.

Cerstin Mahlow is professor of Digital Linguistics and Writing Research at the School of Applied Linguistics at the Zurich University of Applied Sciences (ZHAW). Her main research areas are linguistic modeling of writing processes and writing technology. She holds a Master’s degree in Computational Linguistics, Spanish Philology, and Political Sciences from Friedrich-Alexander-Universität (FAU) and a PhD in Computational Linguistics from the University of Zurich (UZH). As a specialist in higher-education didactics and e-learning she is also interested in approaches for teaching the future skills needed in today’s and tomorrow’s digitally transformed world.

Responsible AI: What Advice is the Norwegian Digitalisation Agency Giving the Public Sector? By Jens Andresen Osberg, Senior Advisor at the National Resource Center for the Use and Sharing of Data under the Norwegian Digitalisation Agency

In June 2023, the Norwegian Digitalisation Agency introduced guidelines for the responsible development and use of artificial intelligence (AI) in the Norwegian public sector. The guidelines aim to assist public organizations in developing and using AI in line with current and future regulations, with a focus on aspects such as explainability, risk assessments, and the use of generative AI. They seek to avoid platitudes and offer practical advice to help organizations understand and implement responsible AI practices. Recognizing the evolving nature of AI and the need for public debate, the guidelines were launched as an ‘open beta’: the Digitalisation Agency is actively seeking feedback from academia, the private sector, and other public entities to ensure that the guidelines stay relevant. This session will feature a presentation by Jens Andresen Osberg, a senior advisor, lawyer, and software developer from the Norwegian Digitalisation Agency, who will provide an overview of the new guidelines.

Jens Andresen Osberg is a lawyer and software developer, and serves as a senior advisor at the National Resource Center for the Use and Sharing of Data under the Norwegian Digitalisation Agency. He is the author of a publication on the right of access to information on automated decisions under the General Data Protection Regulation (GDPR). Osberg holds a Master’s degree in Law and a Bachelor’s degree in Programming and Systems Architecture, both from the University of Oslo. Jens has worked extensively on Schrems II and cloud services and is currently leading the Norwegian Digitalisation Agency’s efforts on the responsible development and use of artificial intelligence.

Contribution of artificial intelligence in public services By Agnès Ruffat, Professor, University of Ottawa
Local Respondent By Fabien Maingain, Alderman for Economic Affairs, Employment, Smart City and Administrative Simplification, Brussels City
18:00 - 18:05 Cocktail opening speech By Jan Danckaert, Rector, Vrije Universiteit Brussel
18:05 - 18:10 Words from Nathanael Ackerman, General Manager, AI4Belgium
18:10 - 18:15 Words from Benoit Macq, Co-coordinator, TRAIL, Belgium
18:15 - 18:20 Words from Sabine Demey, Director, AI Flanders Plan, Belgium
18:20 - 18:45 Sponsors Moment By Karen Boers, Managing Director
18:45 - 21:00 Cocktail and Music
  • Registration & Coffee (08:30 - 09:30)
  • AI & Justice (09:30 - 10:45)
  • Coffee break (10:45 - 11:00)
  • AI & Procurement (11:00 - 12:15)
  • Lunch Break (12:15 - 13:30)
  • Legal Protection Debt in the ML Pipeline (13:30 - 15:00)
  • Coffee break (15:00 - 15:15)
  • Robots & Cities (15:15 - 16:45)
  • Closing Ceremony (16:45 - 17:00)
08:30 - 09:30 Registration, Coffee & Networking (& Dry-runs for speakers)
09:30 - 10:45 Panel discussion: AI and Justice By Gregory Lewkowicz, Head, SmartLaw Hub, Center Perelman
People Can’t Prompt: What Does This Mean for Access to Justice? By Samuel Dahan, Chief of Policy, Deel Inc

There is growing concern within the legal profession regarding overreliance on generative AI tools for legal advice. However, this is not limited to the legal profession: non-lawyers have also been relying on generalized AI for their legal questions. This raises crucial considerations, as generative models like the GPT architecture can generate incorrect responses with unwarranted confidence, commonly referred to as “hallucinations”. Consequently, it is imperative to develop finely tuned AI systems for legal problems that prioritize transparency, reliability, and accessibility.

Additionally, formulating effective prompts is challenging for non-experts, especially when it comes to legal inquiries. This project aims to address these concerns related to open-access legal generative AI by investigating the ability of non-lawyers to engage effectively in “end-user prompt engineering”. To achieve this, we will utilize a design probe: a prototype LLM-based chatbot design tool that will support the development and systematic evaluation of prompting strategies.

We propose to develop OpenJustice, an open-access legal generative AI model that is closely supervised to ensure accuracy. This legal AI model will contain guided-prompting functionality to support self-represented litigants engaged in disputes. To achieve this goal, we rely on reinforcement learning based on open and closed datasets. For the open datasets, we have established an international community of legal professionals and researchers who can contribute to OpenJustice. The closed dataset will be drawn from industry partners’ proprietary data. While proprietary data cannot be disclosed, the two systems will learn from each other and improve the generalized legal model underlying OpenJustice. It is important to note, however, that in its early stages, OpenJustice inputs will be drawn from interactions with sophisticated users with a legal background, to fine-tune the accuracy of the language model and avoid compromising it with misinformation drawn from non-lawyers’ inputs. Finally, our research will empirically evaluate AI capabilities in law, shedding light on artificial legal reasoning and human-machine interaction.

Samuel Dahan is a professor of law at Queen’s University and Cornell Law School. He served as a Cabinet Member at the Court of Justice of the European Union and Policy Advisor at the EU Commission. Dahan is the Director of the Conflict Analytics Lab, a consortium for AI research on law and conflict resolution. He is the Chief of Policy at the Deel Lab for Global Employment, a policy institute on global work. Dahan is leading the development of MyOpenCourt and OpenJustice – a pair of AI legal systems.

AI and justice: are equality bodies the Guardians of Equality? By Nele Roekens, Co-Chair of the Working Group on Artificial Intelligence

In an era where artificial intelligence is reshaping every facet of our lives, its role in the realm of justice presents both opportunities and complex challenges. We will delve into the interplay between AI, non-discrimination, and human rights. AI systems, even when designed with unbiased intentions, can still perpetuate discrimination. What can we do when faced with unjust results, and how can we ensure that AI in justice paves the way for a more just society? We will discuss the evolving role of equality bodies and human rights institutions in this regard.

Nele is a legal advisor and AI team lead at Unia, the Belgian national human rights institution and equality body. She specializes in tech and human rights, most notably non-discrimination.

She is also co-chair of the Working Group on Artificial Intelligence of ENNHRI (the European Network of National Human Rights Institutions), where she represents ENNHRI in the ongoing negotiations on the [Framework] Convention on AI, human rights, rule of law and democracy at the Council of Europe.

Transforming the guardians of the law? AI in the courtroom By Giulia Gentile, Lecturer in Law at Essex Law School

AI is transforming most aspects of human activity, including how individuals relate to public authority. The use of AI in the public sector is a Janus-faced phenomenon: on the one hand, AI can render the State more agile, open, effective and ultimately protective of individuals’ entitlements; on the other hand, AI bears the intrinsic risk of creating a “Super-state”, furthering the powers of state branches at risk of authoritarian deviations. These dynamics emerge acutely in the context of justice systems. Courts across the world are currently facing severe budget cuts while dealing with an increased workload. Automation and AI have the potential to address these shortcomings and deliver more effective, accurate and prompt justice. But AI-driven court systems can also alter the current role of judges, restricting their independence and subjecting them to algorithmic regulation.

Giulia Gentile is Lecturer (Assistant Professor) in Law at Essex Law School. She researches digital regulation, human rights and EU constitutional law. She joined Essex Law School in September 2023, having previously researched and taught at LSE Law School, Maastricht University and King’s College London. She has co-edited three books and authored more than 30 scientific publications. Her academic work was cited, among others, by the UK Parliament, Advocates General at the Court of Justice, the European Banking Authority, the Northern Ireland Human Rights Commission and the Slovenian Constitutional Court. Giulia has provided expert evidence to the European Commission concerning the political rights of the disabled and e-voting procedures and to the UK Parliament on issues concerning the protection of fundamental rights and digital regulation.

Digital journey: the new wave of access to justice? By Marco Giacalone, Co-director, Digitalisation and Access to Justice (DIKE), Faculty of Law and Criminology, Vrije Universiteit Brussel (VUB)

The proliferation of technology has led to a notable transition of certain interpersonal transactions from offline to online. However, traditional methods of conflict resolution, such as litigation and Alternative Dispute Resolution (ADR), have proven inadequate in providing swift and effective justice for online disputes. Despite the discernible distinctions between ADR, Online Dispute Resolution (ODR), and litigation, they all involve human intervention, which entails the inherent risks of human error and cognitive bias.
In this presentation, I will illustrate our digital journey aimed at enhancing access to justice in online disputes. The digital journey focuses on considering the needs of vulnerable parties and mitigating the risks associated with human intervention, aiming to strengthen the resolution of online disputes by leveraging technology and creating a more efficient and accessible system for delivering justice.

Marco is Co-Director of the Research Group on Digitalisation and Access to Justice (DIKE) together with prof. Jachin Van Doninck. He is a Postdoctoral Researcher at the Private and Economic Law department (PREC) and a member of the Research Group on Law, Science, Technology and Society (LSTS) at the Vrije Universiteit Brussel (VUB). He has been involved in several collaborative research projects (co-)funded by the European Union. His work analyses the transformations of law in the field of Access to Justice with a special focus on the fields of Private International Law, Alternative Dispute Resolution (ADR), Online Dispute Resolution (ODR), EU regulations and the application of economic principles into the field of law. His research combines approaches from legal theory, empirical legal studies, and science and technology studies.


Local Respondent By Pierre Sculier, Lawyer and President, French and German-speaking Bar Association (OBFG)
10:45 - 11:00 Coffee break & Visit of partners’ booths (M HALL)
11:00 - 12:15 Panel discussion: AI Procurement and Sandboxing By Tom Lenaerts, Co-lead, Machine Learning Group, Université libre de Bruxelles
Guidelines for Procurement of AI Solutions by the Public Sector: Use cases of Brazil, Chile and Colombia By Juan David Gutiérrez, Associate Professor at Universidad de los Andes (Colombia)

The presentation will explain how national and subnational governments in Brazil, Chile and Colombia have issued and implemented guidelines aimed at ensuring that AI solutions procured by the public sector comply with ethical and AI standards.

Juan David Gutiérrez is an associate professor at Universidad de los Andes (Colombia), where he teaches and researches public policy, public management, and artificial intelligence. He holds a PhD in Public Policy from the University of Oxford and is a member of GPAI’s expert group.

Due Diligence in AI Procurement: Integrating a Duty of Care in High Stakes Contexts By Gisele Waters, Working Group Chair, Developing P3119 Standard on Artificial Intelligence Procurement

Voluntary international consensus-based standards can strengthen common procurement approaches with due diligence processes. The developing IEEE SA P3119 standard for the procurement of AI will be reviewed. The P3119 Working Group built consensus around integrating critical evaluations of (1) the procurement need (the public-sector problem), (2) vendor governance, (3) solicitations/proposals, (4) contracts, and (5) contract management. The standard’s process model includes both general guidance and specific tools with weighted rubrics. These new P3119 processes do not replace existing requirements; they add due diligence and duty-of-care parameters to the purchase of AI systems that make critical decisions on behalf of individuals and communities in high-stakes contexts. Exploratory conversations are being held with local government agencies in the UK, India, and Australia for sandbox testing of the standard.

Dr. Gisele Waters has worked on AI governance and ethical engineering standard development since 2017. She is Chair of the IEEE-Standards Association P3119™ Working Group leading global subject matter experts in drafting an international consensus-based standard for responsible AI procurement. The future standard will provide practical and general guidance (e.g., required activities, tasks, and outcomes) and specific tools and rubrics to cover gaps found currently in common public procurement practices. She is now focused on finding sandbox government collaborations to assess the standard’s process model in procurement projects aimed at increasing efficiency in high-risk contexts such as eligibility automation in human services. She is a human-centered design and social science researcher and also advises digital health start-ups on how to build human-centered data science using AI-enabled analytics and remote patient monitoring. Dr. Waters is recognized globally by “Women in AI Ethics” as one of the 100 Brilliant Women in AI Ethics in 2019 and 2020. In 2023, she also provided the U.S. Congressional AI Caucus with guidance on how to apply volunteer consensus AI governance standards to writing new AI and data privacy legislation.

What do you procure when you procure an AI system? By Frank Dignum, Professor in Socially Aware AI and Director of TAIGA

Many systems with AI elements are a kind of black box: their responses to inputs are not known in all instances and may also change over time. So what guarantees does one get on the functioning of such an AI system? How do you make sure that it will do the right thing all the time? Can you check beforehand what you might be liable for, or is it just a matter of trusting the supplier? We will explore these questions and indicate some possible answers and ways to guide the procurement of AI systems.

Frank Dignum obtained his PhD at the VU in Amsterdam in 1989. He set up a computer science department in Eswatini in 1990 and has worked in Lisbon, Eindhoven and Utrecht; since 2019 he has been Wallenberg Chair in Socially Aware AI at Umeå University in Sweden. Since 2022 he has been the director of TAIGA (the centre for Transdisciplinary AI for the Good of All). He also has an affiliation with Utrecht University and is an honorary professor at the University of Melbourne. He has been a EurAI fellow since 2014. He is well known for his work on norms, and his theory of social agents is employed in social simulations to support policy making and e-coaching. He has given invited lectures and seminars all over the world and has published 22 books and more than 300 papers.

TBC By Lama Saouma, AI Governance at Global Partnership on AI
What to learn from AI procurement toolkits (…and what should I have done differently when I built one) By Rafael Carvalho de Fassio, Member of the Office of the Attorney General in Brazil

In brief, I would like to discuss the pros and cons of AI procurement toolkits, based on my own experience at the WEF, and present some evidence from our research showing the large number of in-house developments in public-sector AI in the US and in Brazil.

Rafael Fassio is a member of the Office of the Attorney General in Brazil and currently acts as the lead counsel for Science, Technology and Innovation at the São Paulo State Government, where he acquired comprehensive experience in public procurement by providing legal advice to the government and its executive branch agencies. As a consultant for the Inter-American Development Bank (IDB), he has recently published a Public Procurement of Innovation Toolkit and a PPI framework for Brazil and LATAM. Rafael is currently a PhD candidate at the University of São Paulo. He was a fellow of the artificial intelligence and machine learning platform at the World Economic Forum, and has recently co-authored an AI Procurement Guide and a WEF white paper aimed at streamlining the procurement of AI solutions worldwide.

12:15 - 13:30 Lunch Break & Visit of partners’ booths (HORTA HALL)
13:30 - 15:00 Panel discussion: Legal Protection Debt in the ML Pipeline By Mireille Hildebrandt, Co-lead, Law Science Technology and Society, Vrije Universiteit Brussel

The machine learning pipeline kicks off with the collection and curation of training data. Besides the environmental impact of generating, storing and processing ever more data, there are further reasons to question the sustainability of this type of data-driven AI system. This panel will engage with a report written by Gianmarco Gori for the Humane AI Network, which is focused on groundbreaking research into human-centred AI. The report investigates how a ‘legal protection debt’ builds up in the ML pipeline, where upstream design decisions may have major downstream impact on the fundamental rights and freedoms of natural persons. The report confronts the legal duties of the GDPR with the legal framework for open data and open science, highlighting the need for responsible AI from the perspective of legal protection by design. The panel hosts computer scientists, developers and legal scholars, promising an animated debate about arduous issues in data-driven research.

Please consult the report in this link: https://www.cohubicol.com/assets/uploads/hai-net-report.pdf

Infrastructuring of the ML pipeline to avoid accountability By Michael Veale, Associate Professor in Digital Rights and Regulation, University College London

AI is increasingly delivered through a supply chain model, with multiple systems querying, integrating, calling and building upon each other. These can span contexts, jurisdictions, types of entities (with varying compliance capacity) and more. Ensuring legal protection, or even analysing these systems for issues of legal compliance, is a serious challenge here. In this intervention, I’ll critically introduce and analyse some of the methods proposed to date, and suggest ways forward.

Dr Michael Veale is Associate Professor in Digital Rights and Regulation at University College London’s Faculty of Laws. His research focuses on how to understand and address the challenges of power and justice that digital technologies and their users create and exacerbate, in areas such as privacy-enhancing technologies and machine learning.

Accountability in the many hands ML pipeline By Irene Kamara, Assistant Professor of Cybercrime Law and Human Rights, Tilburg University
Legal Protection Debt in ML training data sets By Gianmarco Gori, Researcher at the Law Science Technology and Society group, Vrije Universiteit Brussel

I present the findings of the research I have conducted in the context of the Humane AI project. I discuss how practices of ML dataset creation, curation and dissemination play a crucial role in determining the level of legal protection enjoyed by the legal subjects located downstream in ML pipelines. I illustrate how some structural features of ML pipelines can give rise to the problem of “many hands” and to the accumulation of various forms of “technical debt”. I argue that, absent appropriate safeguards, a “Legal Protection Debt” can incrementally build up along the different stages of the pipeline.

I stress the need for the actors involved in ML pipelines to adopt a forward-looking approach to legal compliance. This requires overcoming a siloed and modulated understanding of legal liability and paying keen attention to the potential use cases of datasets.

I show how the requirements set out by data protection law can help overcome such challenges, while at the same time facilitating compliance with the standards set by the Open Data and Open Science framework.

Gianmarco Gori is a postdoctoral researcher at the Law, Science, Technology and Society Research Group (VUB). He is currently doing research in the project Counting as a Human Being in the Era of Computational Law (COHUBICOL) and in the Humane AI Net. He obtained his PhD in Law from the University of Florence, defending a thesis titled “Law, Rules, Machines: Artificial Legal Intelligence and the Artificial Reason and Judgment of the Law”. His current research interests concern the concept of rules in normative practices and computational machines, the epistemological assumptions of data science, and data governance. Gianmarco has previously done research and strategic litigation in the fields of labour exploitation, human trafficking, unlawful discrimination, and immigrants’ and prisoners’ rights. He is a member of the Bar Association of Florence, where he has practised criminal, immigration, and data protection law.

Misuse of Datasets in Relation to the Legal Protection Debt in the ML Pipeline By Masha Medvedeva, Assistant Professor in Legal Technologies at eLaw – Center for Law and Digital Technologies, Leiden University

Masha Medvedeva is an assistant professor in Legal Technologies at Leiden University. Masha has a background in researching the limitations of legal technologies and their impact on legal practice, as well as in developing her own information retrieval and judgement prediction systems in the legal domain.

A risk-free approach to the curation of training data By Paul Lukowicz, Scientific Director of the German Research Center for Artificial Intelligence (DFKI GmbH)
The marginalisation of academic research due to legal impediments By Johannes Textor, Associate Professor, Institute for Computing and Information Sciences, Radboud University

Johannes Textor is originally from Germany, where he completed his PhD in Theoretical Computer Science in 2011. He moved to the Netherlands for postdoctoral research and joined Radboud University Medical Center in 2015. He leads an interdisciplinary research group spanning the medical and science faculties and has many years of experience developing computational methods for biomedical and medical research.

Overregulation of scientific research By Tom Lenaerts, Co-lead, Machine Learning Group, Université libre de Bruxelles
15:00 - 15:15 Coffee break & Visit of partners’ booths (M HALL)
15:15 - 16:45 Panel discussion: Sustainable Robots and Cities By Greet Van de Perre, Postdoctoral Researcher
Can we assess technology’s social and ecological risks to avert detrimental impacts? By Michel Joop van der Schoor, Researcher, TU Berlin

Presenting the development process of a prototype urban service robot, this talk will introduce methods used to assess risks and impacts across all dimensions of sustainability, as well as the results for this case study.

Michel Joop van der Schoor studied mechanical engineering at the Technical University of Berlin (M.Sc., 2017) and was a research assistant there from 2017 to 2023. He is writing a PhD on sustainability in product development, with a focus on the social and ecological dimensions of automation and service robots.

Hybrid [Life x Machine] Harmony: Designing Interfaces to Sustainably Merge Human with Artificial Beings By Olaf Witkowski, Director of Research at Cross Labs

In this talk, we delve into the burgeoning realm of symbiotic interactions between humans and machines. Grounded in rigorous scientific exploration, this discourse unravels the intricacies of crafting cooperative and care-driven interfaces, building on recent giant advances in computer science as well as novel understanding from biology, information science, psychology, and the ethics of artificial life. By prioritizing positive, emergent, mutualistic interactions, we chart a path toward fostering harmonious, hybrid living systems. Embracing this convergence, we posit that, given the right framework design, the future lies not in humans versus machines, but in an enriched collaboration that amplifies the strengths and mitigates the weaknesses of both. This talk is an invitation to envision and shape a future where humanity and technology seamlessly intertwine for the collective betterment, grounded in the emerging science of diverse and hybrid intelligences.

Olaf Witkowski is a leading expert in AI and an ambassador of artificial life, based in Japan. He is the founding director of Cross Labs, a research institute in Kyoto, Japan, focusing on the fundamental principles of intelligence in biological and synthetic systems, and the president of the International Society for Artificial Life (ISAL). He is also an executive officer at the AI company Cross Compass Ltd. (Tokyo, Japan), a lecturer in information sciences at the Graduate School of Arts and Sciences of the University of Tokyo, and a faculty member at several other institutions. He has cofounded multiple ventures in science and technology on three continents, including YHouse Inc. (a nonprofit transdisciplinary research institute in New York studying the emergence of consciousness in the universe) and the Center for the Study of Apparent Selves in Kathmandu (with a focus on AI ethics and wisdom traditions). His research focuses on collective, open-ended, and empathic AI paradigms built from a mathematical understanding of intelligence in any substrate.

TBC By Sigrid Brell-Cokcan, Chair for Individualized Production, Faculty of Architecture | RWTH Aachen University

Univ. Prof. Dr. techn. Sigrid Brell-Cokcan is the founder and head of the Chair of Individualized Production (IP) at RWTH Aachen University, is currently President of the Association for Robots in Architecture (RiA), and served on the board of euRobotics until 2021. In recent years, she has pioneered the simple application of industrial robots in the creative industries and participated in numerous international research and industrial projects. The Association for Robots in Architecture is a development partner of KUKA and Autodesk and a validated EU research institution within the FP7 program. In 2016, Sigrid Brell-Cokcan founded the new Topic Group for Construction Robotics within euRobotics to contribute to the Multi-Annual Roadmap (MAR) for Horizon Europe. In 2017, as editor-in-chief, she launched the scientific Springer journal Construction Robotics, and since 2022 she has been part of the writing team of the living Springer Encyclopedia of Robotics, covering the topic of robotics in construction. At RWTH Aachen, her professorship for Individualized Construction Production deals with the use of innovative machines in material and construction production. To enable efficient, individualized production from batch size one for the construction industry, new and user-friendly methods of human-machine interaction are being developed. The IP chair employs researchers from various areas of robotics and building production to streamline the necessary digital workflow from initial planning to the production process and to redesign the construction site of the future via intuitive, easy-to-use interfaces. Results on innovative developments in “haptic programming” and “cloud remote control” were presented to the public as finalists in the KUKA Innovation Award 2016 and the KUKA Innovation Award on Artificial Intelligence 2021 at the Hannover Messe.
In 2022, she was nominated for the BAUMA Innovation Award in the Research category with the ROBETON project for the robot-supported controlled dismantling of concrete components.

Robots to (out)balance sustainability: Are robots the solution or another problem for sustainability? By Franziska Kirstein, Senior Scientific Domain Lead at Blue Ocean Robotics

Robots hold great promise for helping to solve major challenges related to sustainability. They can support sustainability across all sectors, e.g., fighting climate change, increasing the quality of care, improving accessibility and inclusion, helping to recycle, or reducing the use of chemicals in farming. Consequently, there is a rapid increase in the demand for robots, which could ultimately outweigh any positive impact that robotics aims to achieve. In this talk, we will explore the challenges regarding the sustainable design, development, manufacturing, and deployment of robots, and how these can be turned into opportunities, so that robots become a solution for sustainability instead of another problem.

Franziska Kirstein is Senior Scientific Domain Lead at Blue Ocean Robotics. She has worked in interdisciplinary teams in the field of Human-Robot Interaction since 2012. Her work combines the robotics industry (development and commercialization of robots) with academic research in international collaborations and EU-funded projects; recently with a focus on sustainability in robotics. In 2021, Franziska was selected for the NGI Explorers program with a funded research visit at Washington University in St. Louis, McKelvey School of Engineering. Her work received the “Best Social Innovation Impact” award.

16:50 - 16:55 Observers Committee: Students By Léa Rogliano, Citizen Panel Coordinator, FARI
16:45 - 16:50 Observers Committee: Public Administration, Academe, Civic Organization, Industry By Hinda Haned, Scientific Co-director, Civic AI Lab
16:50 - 16:55 Closing speech By Karen Boers, Managing Director, FARI

Registration is now open!

Save your seat now.

Sponsors Registration

Show your commitment to the AI for the Common Good cause by partnering with us and our unique community!

Contact Us

About FARI AI for the Common Good Institute

FARI is an initiative that aims to develop, study, and foster the adoption and governance of AI, Data, and Robotics technologies in a trustworthy, transparent, open, inclusive, ethical, and responsible way. Inspired by humanistic and European values, FARI aims to help leverage AI-related technologies for societal benefit, such as strengthening and preserving fundamental human rights and achieving the United Nations’ Sustainable Development Goals.

Bozar – Brussels, Belgium