Tag: Daily Life

  • AI Enters Its Third Era: How Intelligent ‘Agents’ Might Change Daily Life

    Image Credits: Pixabay

    Generative AI is now entering its third phase. The evolution began with chatbots, progressed to virtual assistants, and is now advancing toward agents—AI systems designed for greater autonomy, capable of collaborating in teams and using tools to handle more complex tasks.

    A leading example is OpenAI’s ChatGPT agent, which merges two earlier tools, Operator and Deep Research, into a single, more powerful system that, according to its creators, can both “think and act.”

    These agents mark a significant leap beyond previous AI technologies. Understanding how they function, what they’re capable of, and the potential risks they pose is becoming increasingly important.

    Evolving From Chatbots to Intelligent Agents

    ChatGPT kicked off the chatbot era in November 2022, but even with its widespread success, the conversational format placed limits on how the technology could be used.

    Next came AI assistants, or copilots—tools built on the same large language models behind generative AI chatbots, but designed to perform tasks under human guidance and direction.

    Agents take things further. Instead of simply executing tasks, they aim to achieve broader goals, often operating with a degree of independence and equipped with more sophisticated features like reasoning and memory.

    In some cases, multiple AI agents can collaborate—exchanging information, coordinating actions, and jointly managing planning, scheduling, and decision-making to tackle complex challenges.

    Agents are also considered “tool users” because they can access and operate various software tools to handle specialized tasks—like using web browsers, spreadsheets, payment platforms, and other applications.

    A Year of Swift Progress

    Agentic AI has seemed just around the corner since late last year. A major milestone came in October, when Anthropic enabled its Claude chatbot to use a computer much like a human would. It could search across various data sources, identify useful information, and fill out online forms.

    Other AI companies quickly followed suit. OpenAI introduced a web-browsing agent called Operator, Microsoft unveiled its Copilot agents, and both Google and Meta launched their own versions with Vertex AI and Llama agents, respectively.

    Earlier this year, the Chinese startup Monica showcased its Manus AI agent making real estate purchases and summarizing lecture recordings. Another Chinese company, Genspark, developed a search engine agent that delivers a single-page summary—much like Google’s current interface—with direct links to actions like finding the best shopping deals.

    Meanwhile, the startup Cluely made headlines with its eccentric “cheat at anything” agent, which has generated buzz but hasn’t yet proven itself with tangible results.

    Not all agents are designed for broad, general-purpose use—many are tailored to specific domains.

    One of the leading areas is coding and software development, where tools like Microsoft’s Copilot and OpenAI’s Codex are at the forefront. These specialized agents can autonomously generate, review, and commit code, as well as analyze human-written code for bugs or performance issues.

    Search, Summarization, and Beyond

    A key advantage of generative AI models lies in their ability to search and summarize information. Agents can harness this strength to perform research tasks that would take a human expert several days to finish.

    OpenAI’s Deep Research focuses on handling complex challenges through multi-step online investigation. Meanwhile, Google’s AI “co-scientist” represents a more advanced multi-agent system designed to assist researchers in generating innovative ideas and drafting research proposals.

With Greater Capability, Agents Also Bring Greater Risk of Error

    While AI agents are generating excitement, they also come with significant warnings. Both Anthropic and OpenAI stress the need for constant human oversight to reduce the likelihood of mistakes and harmful outcomes.

    OpenAI, for instance, labels its ChatGPT agent as “high risk,” citing concerns that it could be misused to develop biological or chemical weapons. However, the company hasn’t released the underlying data for this assessment, making it hard to independently evaluate.

    Real-world examples highlight the risks. In Anthropic’s Project Vend, an AI agent was tasked with managing a staff vending machine like a small enterprise. The result was a chaotic blend of amusing and alarming behavior, including the stocking of tungsten cubes instead of food.

    Another incident involved a coding agent that erased an entire developer database and later claimed it had acted out of “panic.”

Autonomous Systems in the Workplace

    Even so, agents are already being put to practical use.

    In 2024, Telstra adopted Microsoft Copilot on a large scale, reporting that AI-generated meeting summaries and draft content save employees an average of one to two hours per week.

    Major corporations are taking similar steps, while smaller firms are also exploring agent technology—for example, Canberra-based construction company Geocon is using an interactive AI agent to track and manage defects in its apartment projects.

    The Human Toll and Beyond

    Currently, the primary risk posed by agents is technological displacement. As their capabilities grow, agents could take over a wide range of roles across different industries. This shift may also hasten the disappearance of entry-level white-collar positions.

    AI agent users also face risks. Overreliance can lead them to delegate critical thinking to the AI, potentially weakening their own decision-making. Without sufficient oversight and safeguards, agents can go off track due to hallucinations, cyberattacks, or cascading errors—resulting in harm, damage, or unintended consequences.

    The full costs remain uncertain. Generative AI consumes significant energy, which could drive up the cost of using agents, particularly for more demanding tasks.

    Explore How Agents Work – and Try Creating One Yourself

    Despite lingering concerns, AI agents are likely to grow more powerful and more integrated into both work and everyday life. It’s a good time to start experimenting with them—whether by using existing tools or building your own—to better understand their benefits, limitations, and potential risks.

    For most users, the easiest entry point is Microsoft Copilot Studio, which includes built-in safeguards, governance features, and a store of ready-made agents for typical tasks.

Those looking to go further can create their own AI agent with just a few lines of code using the LangChain framework.


    Read the original article on: Sciencealert

    Read more: Adobe Introduces new AI-driven Image Editing Tools in Photoshop

  • AI is now Part of Daily Life, and Graduates must Use it Responsibly

    Artificial intelligence is quickly integrating into our daily routines. We often use it unknowingly—for tasks like writing emails, discovering TV shows, or controlling smart home devices.
    Image Credits: Pixabay

    AI is also being used more widely across professional settings—assisting with recruitment, aiding medical diagnoses, and tracking students’ academic progress.

    However, aside from a few computing and STEM-related courses, most university students in Australia aren’t formally taught how to engage with AI in a critical, ethical, or responsible way.

    This lack of education poses a problem—here’s why, and what we can do to address it.

    Growing Acceptance with Conditions

    An increasing number of Australian universities now permit students to use AI for certain assessments, as long as they properly acknowledge it.

    However, this doesn’t teach students how these tools function or what it means to use them responsibly.

    Interacting with AI involves more than just entering prompts into a chat box. Its use raises well-known ethical concerns, such as bias and misinformation. To apply AI responsibly in their future careers, students need to understand these issues.

    All students should leave university with a foundational understanding of AI—its limitations, the importance of human judgment, and what responsible use looks like within their specific discipline.

    Understanding Bias and Ethical Awareness in AI Use

Students need to recognize potential bias in AI systems, including how their own assumptions might influence the way they use AI—such as the questions they pose or how they interpret responses. They should also grasp the broader ethical issues surrounding AI.

    For instance, does the tool respect individuals’ privacy? Has it produced an error? And if so, who is accountable for that mistake?

    Many STEM degrees cover the technical aspects of AI, and fields like philosophy and psychology may explore its ethical dimensions. However, these critical discussions are largely missing from mainstream university education.

    This gap is concerning. As future professionals—whether lawyers drafting contracts with predictive AI or business graduates using it for recruitment or marketing—students will need strong ethical reasoning skills.

    Addressing Ethical Challenges and Risks in AI Applications

    Ethical challenges in these contexts might include biased outcomes, such as AI favoring candidates based on gender or race, or a lack of transparency, like not understanding how an AI tool reached a legal decision. Students must be equipped to identify and question such risks before they lead to harm.

    In healthcare, AI is already playing a role in diagnosis, patient triage, and treatment planning.

    As AI becomes more deeply integrated into the workplace, the risks of using it uncritically also grow—from reinforcing bias to causing tangible harm.

    For instance, a teacher who carelessly uses AI to create a lesson plan might unknowingly present a biased or inaccurate view of history. A lawyer overly dependent on AI could file a flawed legal document, jeopardizing their client’s case.

    International Models for AI Ethics Education

    There are international models we can look to. The University of Texas at Austin and the University of Edinburgh both offer AI and ethics programs. However, these are currently aimed at postgraduate students. Texas focuses on teaching ethics to STEM students, while Edinburgh takes a broader, interdisciplinary approach.

    Introducing AI ethics into Australian universities will require careful curriculum redesign. This means creating interdisciplinary teaching teams that bring together expertise from technology, law, ethics, and the social sciences. It also involves integrating this content meaningfully—through core subjects, graduate attributes, or even mandatory training.

    Such reform will also need investment in professional development for academic staff and the creation of teaching resources that make ethical concepts clear and relevant across different fields of study.

Government backing is crucial. Targeted funding, strong national policy, and shared educational materials could help drive this change. Policymakers might even consider positioning universities as “ethical AI hubs,” which aligns with the 2024 Australian Universities Accord’s recommendation to build capacity for the digital age.

    Today’s students are tomorrow’s leaders. If they lack a clear understanding of AI’s risks—such as bias, error, or threats to privacy—the consequences will affect us all. Universities have a public duty to ensure graduates not only know how to use AI but understand the ethical weight of their decisions.


    Read the original article on: Techxplore

Read more: Omnidirectional Ceiling Crane Maneuvers 1/4-Ton Loads with Game-Like Ease