This year’s edition of TNW Conference is right around the corner — along with summer, we hope. We have a spectacular line-up of speakers, and we cannot wait to share with you the special atmosphere that sets TNW apart from other tech conferences.
Leading up to the event, the editorial team will be sharing their highlights and what not to miss from TNW’s flagship conference, taking place in Amsterdam on 20 and 21 June. We hope you are as excited as we are, and we cannot wait to see you there.
One of the sessions I am really looking forward to attending this year is the panel “Humanism: The Philosophical Debate for AI Ethics.” It will take place between 12:30 and 13:10 on the main stage on Day 1.
Panelists include Aliya Grig, founder of Evolwe AI, which claims to be building AI “closest to humans in terms of providing empathy, reasoning, and cognitive skills”; Dr Vivienne Ming, founder and CEO of “mad science incubator” Socos Labs; Ben Goertzel, the “father of AGI” and CEO of SingularityNET; Gary Shapiro, CEO of the Consumer Technology Association; and Ekaterina Almasque, general partner at VC OpenOcean.
Shaping ethical AI systems
As anyone who has listened to the TNW podcast on occasion will know, I am incredibly fascinated by theories of consciousness and the mind — in fact, that is how I came to be interested in the concept of AI to begin with. (I also once wrote a somewhat pretentious thesis in ethics and moral philosophy at university.)
Human intelligence and cognition arise from interactions and processes far more complex than those underlying systems such as large language models. It is therefore highly unlikely that this particular path will lead to human-level intelligence.
However, along with other types of systems, large language models will continue to shape our world in ways we perhaps cannot yet perceive, becoming ever more ubiquitous until AI functions as something like a utility, buzzing away in the background like electricity.
As such, it is essential that those training and implementing AI do so ethically and without bias, so that all of society can benefit from the technology.
I am eager to hear the panelists’ perspectives on what kinds of ethical frameworks and safeguards we need to implement to ensure that AI, general or otherwise, is fair and equitable. Who should decide what these should be, and who should be responsible for unethical or biased applications of AI systems?
Come join the debates shaping our future.
If you’re interested in attending the tech festival (and saying hi to our editorial team), we’ve got a special offer for our readers. At the ticket checkout, use the code TNWXMEDIA to get 30% off your business pass, investor pass, or startup packages (Bootstrap and Scaleup).