    “The Unasked Question: The Crucial Concern about AI”

    December 13, 2023

    The near implosion of OpenAI, the world leader in the flourishing field of artificial intelligence, has revealed a conflict within the organization and the community at large about the speed at which technology should advance and whether developing it more slowly would help make it safer.

    As a professor of both artificial intelligence and AI ethics, I believe that this problem statement overlooks the crucial question of the type of artificial intelligence we are accelerating or decelerating. In my 40 years of research on natural language processing and computational creativity in the field of AI, I pioneered a series of advances in machine learning that allowed me to build the world’s first large-scale online language translator, which quickly spawned programs like Google Translate and Microsoft’s Bing Translator. It is difficult to argue against the development of translation artificial intelligences. Reducing misunderstandings between cultures may be one of the most important things humanity can do to survive increasing geopolitical polarization.

    However, AI also has a dark side. I witnessed how many of the same techniques – inventions from our natural language processing and machine learning community for benevolent purposes – were used in social networks and search and recommendation engines to amplify polarization, biases, and misinformation in a way that increasingly poses existential threats to democracy. Recently, as artificial intelligence has become more powerful, we have seen how the technology has taken cyber scams to a new level with deepfake voices of colleagues or loved ones used to steal money.

Artificial intelligences manipulate humanity. And they are poised to wield an almost inconceivable power to manipulate our unconscious, something only hinted at so far by large language models like ChatGPT. The Oppenheimer moment is real.

    However, the “speed versus safety” dilemma is not the only thing distracting us from the important questions and hiding the real threats that loom over us.

One of the key steps in AI safety circles is “AI alignment,” which focuses on developing methods to align artificial intelligences with human objectives. Until the recent chaos, Ilya Sutskever and Jan Leike, OpenAI’s research leaders on alignment, co-led a research program on “superalignment” that attempts to answer a deceptively simple question: “How can we ensure that artificial intelligence systems that are much smarter than humans fulfill human objectives?”

    However, in AI alignment, once again, there is an obvious issue we do not want to deal with. Alignment… with what kind of human objectives?

    For a long time, philosophers, politicians, and populations have struggled with the thorny dilemmas between different objectives. Instant gratification in the short term? Long-term happiness? Avoiding extinction? Individual liberties? Collective well-being? Limits to inequality? Equal opportunities? Degree of governance? Freedom of speech? Protection against harmful discourse? Acceptable level of manipulation? Tolerance of diversity? Acceptable recklessness? Rights versus responsibilities?

There is no universal consensus on these objectives, let alone on even more divisive issues such as gun rights, reproductive rights, or geopolitical conflicts. In fact, the OpenAI saga amply demonstrates how difficult it is to align objectives even among a small group of OpenAI leaders. How, then, could artificial intelligence align with the objectives of all of humanity?

If this problem seems obvious, why does AI alignment weigh so heavily in the AI community? It is likely because the dominant paradigm in artificial intelligence is to define a mathematical function as the “objective function,” a quantitative target that the AI is built to pursue. At every moment, the artificial brain of an AI makes thousands, millions, or even billions of small decisions to maximize the achievement of this objective. For example, a recent study showed how a medical AI designed to automate part of the chest X-ray workload detected 99 percent of all abnormal chest X-rays, surpassing human radiologists.

Therefore, AI researchers are strongly tempted to frame everything in terms of maximizing an objective function; we reach by habit for the one tool we know. To achieve safe AI, we just have to maximize alignment between technology and human objectives! If only we could define a clear objective function that measures the degree of alignment with all of humanity’s objectives.
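As an illustrative sketch only (not drawn from the article), the objective-function paradigm described above can be reduced to its barest form: an agent repeatedly nudges a parameter to maximize a single scalar function. The function and numbers here are hypothetical toy choices.

```python
# Toy illustration of the "objective function" paradigm: an agent makes
# many small adjustments, each one chosen to increase a single scalar score.

def objective(x: float) -> float:
    # A hypothetical objective with one maximum, at x = 3.
    return -(x - 3.0) ** 2

def maximize(x: float, lr: float = 0.1, steps: int = 200) -> float:
    """Plain gradient ascent using a numerical derivative."""
    eps = 1e-6
    for _ in range(steps):
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x += lr * grad  # step uphill on the objective
    return x

x_star = maximize(0.0)
print(round(x_star, 2))  # converges toward the maximum at 3.0
```

The author’s point is precisely that human objectives, unlike this toy score, resist being compressed into any single scalar function an optimizer could climb.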

    In the AI research community, we too often ignore the existential risks that arise from how AI interacts with the complex dynamics of humanity’s chaotic psychological, social, cultural, political, and emotional factors – ones that do not neatly fit into a simple mathematical function.

    AI companies, researchers, and regulators urgently need to address the problem posed by the expected functioning of artificial intelligences in the face of age-old dilemmas between opposing objectives that have not been resolved. They also need to accelerate the development of new types of artificial intelligences that can help solve these problems. For example, one of my research projects includes an artificial intelligence that not only fact-checks information but automatically reformulates it to reduce readers’ implicit biases. Accelerating this work is pressing, especially given the exponential advancement of AI technology today.

    In the meantime, we must decelerate the deployment of artificial intelligences that exacerbate sociopolitical instability, such as algorithms that perpetuate conspiracy theories. Instead, we must accelerate the development of artificial intelligences that help reduce these dangerous levels of polarization.

    And all of us – AI experts, influential figures in Silicon Valley, and major media outlets shaping everyday conversations – must stop sweeping these real challenges under the rug through overly simplistic and ill-framed narratives about the acceleration versus deceleration of AI. We must recognize that our work impacts human beings, and human beings are messy and complex in ways that perhaps an elegant equation cannot reflect.

Culture matters. AI is already part of everyday life in our society, and its presence will become more pronounced than most people ever imagined. We cannot afford to realize this too late. Let a boardroom conflict be our wake-up call: it is still possible to dream big while slowing down the chaos.

    Kai is a professor of computer science and engineering at the Hong Kong University of Science and Technology.
