EMA IT & Data Management Research, Industry Analysis & Consulting

Google’s New Android Developer Verification Will Increase Device Security Risk for Power Users

Oct 6, 2025 12:10:49 PM

Google's upcoming Android Developer Verification Program (ADVP), which launches fully in 2026, requires all developers to register and verify their identities before their applications can be installed on certified Android devices.

The ADVP requires developers to create an account in the new Android Developer Console and submit extensive personal information. Developers must provide their legal name, physical address, email, and phone number. Many must also supply government-issued identification to support verification. Organizations face additional requirements, such as providing a DUNS (Data Universal Numbering System) number. Once verified, developers must register the unique identifiers for all their applications. This process links each app's cryptographic signature to a verified real-world entity, establishing a traceable chain of custody for all software distributed on certified devices.

Google says that this constitutes a necessary step to combat the rising tide of malware and financial fraud, citing internal research. The research suggests that apps sideloaded from the internet are over 50 times more likely to contain malware than those from the Google Play Store. The company argues that this verification process is akin to an "ID check at the airport," aimed at making it more difficult for malicious actors to operate anonymously.

However, what Google claims to do for the sake of security will, in fact, result in less security overall for power users who want to continue using unverified apps.

Forcing Open Source Apps into Abandonware

The ADVP raises a significant concern about its impact on developer anonymity. For many independent developers, particularly those involved in privacy-focused projects or operating in restrictive political environments, anonymity remains crucial for personal safety. The mandate to link real-world identities to applications could expose these developers to legal risks and physical danger, potentially stifling privacy-critical development and pushing it outside mainstream platforms.

The Free and Open-Source Software (FOSS) community, particularly in projects such as F-Droid, faces an existential threat. F-Droid, a long-standing repository for FOSS Android apps, operates on principles of decentralization and anonymity. The ADVP's requirement for verified developer identities is fundamentally incompatible with F-Droid's model, as it cannot compel its many upstream developers to undergo Google's verification process. If F-Droid were to register applications under its own organizational identity, it would, in effect, claim exclusive distribution rights, which contradicts the core tenets of FOSS. This could mean that existing F-Droid apps on users' devices would no longer receive updates. When security vulnerabilities are later discovered in those apps, users won't receive fixes, leaving known-vulnerable code on the device and increasing its attack surface.

Unfortunately, that is just the beginning of security concerns from this shift. This will be even more damaging to non-technical users who will undoubtedly seek out methods to "jailbreak" their Android devices.

Replacing Android with Unofficial Builds and Increasing Attack Surface

Android's security architecture is layered, beginning with the hardware and extending through the kernel and application framework. Essential components include the immutable hardware root-of-trust, the bootloader, the Linux kernel, proprietary vendor components, the Android Open Source Project (AOSP) framework, and the Mandatory Access Control layer, primarily enforced by SELinux. The attack surface encompasses all accessible and exploitable code paths, including user applications, system services, inter-process communication mechanisms, the network stack, and hardware interfaces managed by vendor drivers. Android's built-in security features, such as its robust Linux kernel foundation and application sandboxing, are designed to isolate applications from one another and protect system resources.

The ADVP's technical scope is limited to "certified Android devices," which are those preinstalled with Google Mobile Services and the Play Protect framework. Uncertified devices running custom ROMs or modified AOSP builds without these components may not be affected. This means that more users may adopt unofficial Android builds on their devices to maintain the ability to install unverified apps. In the process, these users will likely increase their attack surface and expose themselves to new vulnerabilities.

The most significant increase in attack surface for most unofficial Android builds stems from the compromise of the hardware security chain of trust, necessitated by unlocking the bootloader. Modern mobile hardware security begins with the hardware root of trust (RoT), an immutable foundation within the system-on-chip or a dedicated security module that stores cryptographic keys to secure the boot process. This foundation remains inherently trusted and is designed to resist malware. Android Verified Boot (AVB) extends this trust throughout the software stack, cryptographically verifying the integrity of boot and system partitions to ensure software hasn't been tampered with since the OEM signed it.
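The chain-of-trust idea behind AVB can be illustrated with a toy model. This is a conceptual sketch only, not the real vbmeta/avbtool format: each already-trusted stage holds a digest of the next stage's image and refuses to hand off control to anything that doesn't match.

```python
import hashlib

def verify_stage(image: bytes, trusted_digest: str) -> bool:
    """Allow the next boot stage to run only if its hash matches the
    digest anchored in the previous (already-trusted) stage."""
    return hashlib.sha256(image).hexdigest() == trusted_digest

# The hardware root of trust pins the first stage's digest; each
# verified stage then pins the digest of the stage after it.
boot_image = b"example boot partition contents"
pinned_digest = hashlib.sha256(boot_image).hexdigest()

assert verify_stage(boot_image, pinned_digest)             # untampered: boots
assert not verify_stage(boot_image + b"X", pinned_digest)  # tampered: refused
```

Unlocking the bootloader effectively tells the first link in this chain to stop enforcing the comparison, which is why everything downstream loses its integrity guarantee.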

For almost all devices, installing a custom operating system requires unlocking the bootloader, which disables the OEM's chain of trust and renders AVB ineffective for validating OS integrity. This removal of a primary physical barrier shifts the security reliance from hardware enforcement to software policies like SELinux and application sandboxing. Some advanced, security-hardened ROMs, such as GrapheneOS, are designed for devices like Google's Pixels that allow users to re-lock the bootloader after installing a cryptographically signed custom OS image. This capability restores hardware-backed integrity checking, mitigating the primary physical risk associated with unofficial builds. Distributions that support relockable, verified bootloaders have a significantly stronger physical security posture.

An unlocked bootloader introduces a substantial risk of persistence. If an attacker gains physical access, they could potentially flash malicious boot images or recovery environments, enabling the installation of persistent rootkits or malware that survive data wipes—a scenario known as the "Evil Maid" threat model. In an unlocked state, kernel exploits or privilege-escalation vulnerabilities triggered remotely can have far more devastating consequences. With write access to arbitrary files, an attacker could implant persistent malware at the firmware or bootloader level.

The custom ROM ecosystem also presents a varying spectrum of patch velocity. While some security-focused projects like GrapheneOS incorporate monthly Android security patches with efficiency comparable to Google's own Pixel devices, most general-purpose distributions, such as LineageOS, prioritize broad hardware compatibility. They rely on community maintainers, which leads to variable patch delivery. More critically, the maintenance lifecycle of many custom ROMs is volatile, with developers sometimes ending support and leaving devices with unmitigated vulnerabilities. For devices that have passed their manufacturer's end-of-life (EOL) date, even the certified build's attack surface becomes catastrophically high, as vulnerabilities remain unpatched.

Additionally, despite transparency benefits, open-source communities are susceptible to supply chain attacks. The risk profile for custom ROMs ties closely to the size and maturity of the development team. Smaller, niche projects often lack the rigorous community oversight of larger projects, presenting a higher risk of malicious code introduction or build infrastructure compromise going unnoticed. Mitigating this risk requires prioritizing large, well-established, and actively audited projects.

Simply put, custom ROMs carry a lot of risk, and while the larger projects are likely better maintained, unlocking the bootloader and patch velocity will always be a concern.

Google Isn't Disabling Sideloading; They're Just Adding Extra (Dangerous) Steps

Google frames the ADVP as a security measure. However, the primary technical bypass for unverified applications on certified devices will be through the Android Debug Bridge (ADB). While this allows developers to debug and test applications locally, it is not a practical solution for general consumers due to its technical complexity, requiring familiarity with command-line tools and PC setup.

The reliance on ADB for bypass introduces its own set of security paradoxes. To install unverified apps, users may need to enable developer options and USB debugging, increasing the device's attack surface and making it vulnerable to unauthorized access. This is particularly concerning if users use ADB over unsecured networks or if users download untrusted ADB tools. The risk shifts from malware within an app's content to system-level exploitation via the debug interface, posing potentially deeper consequences for compromised devices.

The ADB system operates on a client-server architecture, with the critical component being the daemon (adbd) running on the Android device. This daemon executes commands from the connected host, typically under the highly privileged "shell" user context. This elevated privilege allows for actions that are normally restricted, such as direct access to user data and system settings. While ADB over USB relies on a physical connection and the security of the host computer, ADB over Wi-Fi dramatically increases the risk by exposing the device to the entire local network.
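The client-server split is visible in ADB's simple wire protocol: the adb client talks to a local adb server on TCP port 5037, which in turn relays to adbd on the device. Host requests are ASCII payloads prefixed with their length as four hex digits. A minimal framing sketch (the service name `host:version` is a real ADB host service; the query helper assumes a running adb server and is not exercised here):

```python
import socket

def frame(service: str) -> bytes:
    """ADB host messages are the payload prefixed with its length as
    4 lowercase hex digits, e.g. 'host:version' -> b'000chost:version'."""
    payload = service.encode("ascii")
    return f"{len(payload):04x}".encode("ascii") + payload

def query_adb_server(service: str, host: str = "127.0.0.1", port: int = 5037) -> bytes:
    """Send one framed request to a running adb server and return the raw
    reply (it begins with b'OKAY' or b'FAIL'). Requires 'adb start-server'."""
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(frame(service))
        return s.recv(4096)

assert frame("host:version") == b"000chost:version"
```

The protocol itself carries no encryption; its security rests entirely on the authorization step described next.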

ADB has incorporated an RSA key exchange mechanism for authentication, and this process requires the user to explicitly authorize a connecting computer by unlocking the device and approving a prompt displaying the host's public key fingerprint. Once authorized, the host's public key is stored persistently on the device, allowing for future connections without repeated prompts. However, this persistence creates a long-term security vulnerability. If a developer's workstation, which hosts the private ADB key, is compromised, it can serve as a perpetual attack vector against any Android device it has previously debugged, until that authorization is manually revoked. The security of this system is also contingent on the physical integrity of the device and the user's awareness of the authorization prompt. It offers no more protection than Microsoft’s “Do you want to allow the following program to make changes to this computer?” prompt, which is easily bypassed by simple social engineering tricks.
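The fingerprint shown in that on-device dialog is derived from the host's public key, which adb stores in `~/.android/adbkey.pub` (the private half lives in `~/.android/adbkey`). The sketch below computes a colon-separated MD5 fingerprint from such a file's base64 blob; treat it as illustrative of the idea, since Android's exact derivation and presentation can vary across versions.

```python
import base64
import hashlib

def key_fingerprint(adbkey_pub_line: str) -> str:
    """Compute a colon-separated MD5 fingerprint from an adbkey.pub
    line of the form '<base64-blob> user@host'. Illustrative of the
    value the authorization prompt displays; Android's exact
    derivation may differ by version."""
    blob = base64.b64decode(adbkey_pub_line.split()[0])
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)).upper()

# Synthetic example (not a real key):
example_line = base64.b64encode(b"example-public-key-bytes").decode() + " user@host"
print(key_fingerprint(example_line))  # 16 colon-separated hex pairs
```

The operational takeaway: whoever holds `~/.android/adbkey` holds a standing credential for every device that ever approved that fingerprint, until the user revokes authorizations on the device.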

The security implications escalate significantly when users enable ADB over Wi-Fi, also known as TCP/IP mode. This configuration causes the adbd daemon to listen on a network port (TCP 5555 by default), making the device accessible to any other device on the same network. This eliminates the physical barrier of USB tethering and opens the device to a much wider attack surface. Devices running ADB over Wi-Fi can be scanned and exploited by attackers on unsecured networks. In some cases, people have found them exposed directly to the public internet. This global exposure is a critical security vulnerability, as automated scanning campaigns actively seek out and abuse these accessible ADB endpoints. Standard network defenses can be insufficient, as internal network segmentation may be weak, allowing attackers to pivot from a compromised device on the same network to one with ADB enabled.
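Attackers locate such devices with nothing more than a TCP connect scan for port 5555, the default port opened by `adb tcpip 5555`. A minimal probe sketch, for defensive auditing of your own network only (any hosts you pass in are placeholders):

```python
import socket

ADB_TCP_PORT = 5555  # default port opened by 'adb tcpip 5555'

def adb_port_open(host: str, port: int = ADB_TCP_PORT, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on the ADB
    port. A listener is not proof of adbd, but this connect test is
    exactly how mass scanners shortlist targets before attempting
    the ADB handshake."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Internet-wide scanners run the equivalent of this loop continuously, which is why a phone left in TCP/IP mode on a public network, or behind a misconfigured router, gets probed within minutes rather than days.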

Advanced exploitation techniques highlight the severe impact of ADB vulnerabilities. A malicious application installed on a device can exploit ADB over Wi-Fi misconfigurations through localhost connections. By intercepting and overlaying the RSA authentication prompt, the malicious app can trick the user into authorizing its own connection, achieving a significant local privilege escalation from a sandboxed environment to the privileged shell context. This allows the app to execute system-level commands. Moreover, ADB access is a known method for bypassing critical security features like Factory Reset Protection (FRP), which is designed to prevent the use of lost or stolen devices. This bypass capability demonstrates that an active ADB connection can override higher-level security controls, including those managed by enterprise mobile device management (MDM) solutions. ADB also facilitates malware persistence, providing a pathway for attackers to gain root privileges, install rootkits, and establish persistent command and control channels. Attackers can use it to disable or modify device-side security monitoring applications as well.

A Few Final Thoughts

One of the most important lessons I learned in my 15+ years as a cybersecurity practitioner is that the more friction you introduce to a user in the name of security, the more likely that user is going to attempt to bypass security completely, counteracting your original intentions. Many argue against the ADVP, citing consumer choice, anti-competitive concerns, and developer anonymity. However, none of these concerns are as important as the fact that Google’s attempts to make Android “more secure” are, in fact, increasing the risk for Android users who want to maintain their current ability to install unverified apps. By eliminating the low-friction sideloading channel for unverified apps, the policy forces users who rely on open-source software, niche utilities, or privacy-critical applications into two high-risk operational maneuvers. They must either adopt unofficial Android builds (custom ROMs) or enable dangerous debugging features. In most cases, this requires unlocking the bootloader and permanently sacrificing the hardware-backed security of Android Verified Boot and the Hardware Root of Trust, thereby opening the device to devastating, persistent "Evil Maid" attacks. Otherwise, they must use the Android Debug Bridge (ADB) as the primary installation bypass. Relying on ADB means they must constantly enable high-privilege debugging settings, which turns the device's administrative control plane into a persistent vulnerability.

Overall, the ADVP exchanges the potential for application-level malware for a dramatic, systemic increase in the security attack surface at the operating system and network layers, fundamentally undermining the security of the very users the program claims to protect.

Written by Ken Buckler

Kenneth Buckler, CASP, is a research director of information security/risk and compliance management for Enterprise Management Associates, a leading industry analyst and consulting firm that provides deep insight across the full spectrum of IT and data management technologies. Before EMA, he supported a Federal agency’s Enterprise Visibility program, providing security insights and compliance trending for the agency’s national network of computers and devices. He has also served in technical hands-on roles across multiple agencies in the Federal cyber security space and has published three Cyber Security books. Ken holds multiple technical certifications, including CompTIA’s Advanced Security Practitioner (CASP) certification.
