No OS vendor wants you to do that, unless you're using a desktop, and then Google wants you to use Chrome. They all want a 30% cut of revenue and/or platform lock-in. They'll rely on dark patterns and nerfing features to push you to their app stores.
Similarly, software vendors want you to use apps for the same reason you don't want to use them. They'll rely on dark patterns to herd you to their native apps.
These two desires influence whether it's viable to use the web instead of apps. I think we need legislation in this area: apps should be secondary to the web services they rely on, and companies should not be allowed to purposely make their websites worse in order to push you onto their apps.
Looks to me that the browser version requires the targeted website to be iframed into the malicious site for this to work, which is mitigated significantly by the fact that many sites today—and certainly the most security-sensitive ones—restrict where they can be iframed via security headers. Allowing your site to be loaded in an iframe elsewhere is already a security risk, and even the most basic scans will tell you you're vulnerable to clickjacking if you do not set those headers.
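For reference, the anti-framing protections mentioned above are just HTTP response headers. A minimal sketch of what a security-sensitive site would send (`frame-ancestors` is the modern CSP mechanism; `X-Frame-Options` covers older browsers):

```
Content-Security-Policy: frame-ancestors 'self'
X-Frame-Options: SAMEORIGIN
```

With these set, a malicious page's attempt to iframe the site is blocked by the browser before any pixels are rendered.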
Note that for TOTP the attack is only feasible if the font and pixel-perfect positions on the screen are known:
> The attacks described in Section 5 take hours to steal sensitive screen regions—placing certain categories of ephemeral secrets out of reach for the attacker app. Consider for example 2FA codes. By default, these 6-digit codes are refreshed every 30 seconds [38]. This imposes a strict time limit on the attack: if the attacker cannot leak the 6 digits within 30 seconds, they disappear from the screen
> Instead, assuming the font is known to the attacker, each secret digit can be differentiated by leaking just a few carefully chosen pixels
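For context, the 30-second refresh quoted above comes from RFC 6238: the code is an HMAC over the current Unix time divided into 30-second steps, so any two instants in the same step yield the same code. A minimal sketch in Python (stdlib only):

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, unix_time: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter."""
    counter = int(unix_time // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Any two timestamps inside the same 30-second window yield the same code;
# that window is exactly the attacker's deadline described above.
```

The 30-second step is a default, not a protocol constant, which is why the paper hedges with "by default".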
> I am an app developer. How do I protect my users?
> We are not aware of mitigation strategies to protect apps against Pixnapping. If you have any insights into mitigations, please let us know and we will update this section.
IDK, I think there are obvious low-hanging attempts [0], such as: don't display secret codes in a stable position on screen? Hide them when the app is in the background? Move them around to make timing attacks difficult? Change colours and contrast over time? Add static noise around them? Don't show the whole code at any one time (not necessarily so that the user can't observe it: just blink parts of it in and out, maybe)? Admittedly, all of this will harm UX more or less, but in naïve theory it should significantly raise the demands on the attacker.
[0] Provided the target of the secret stealing is not in fact some system static raster snapshot containing the secret, cached for task switcher or something like that.
Huh. I remember a while ago Google Authenticator hid TOTP codes until you tap on them to reveal them. I remember thinking this was an absolutely stupid feature, because it did not mitigate any real threat and was annoying and inconvenient. Apparently a lot of people agreed because a few weeks later, Google Authenticator quietly rolled that feature back.
I wonder if they were aware of this flaw, and were mitigating the risk.
I would say this is a nice and clever attack vector: calculating pixel values from rendering time, i.e. a side channel. Kudos to the researchers, though it would take a lot of time and pixel captures even for Google Authenticator. My worry now is how much of this could be reproduced to steal OTPs from messages.
Given the rise of well-defined templates in email phishing (for example, vibe-coded designs that accurately mimic GitHub notification emails), I have literally stopped clicking links in email, and now I have stopped launching apps directly from intents (say, "open with"). Better to open the app manually and perform the operation there, plus remove useless apps. People underestimate the attack surface (it can come through an SDK or web-page intents).
It's not exactly a new technique, but it's effective for most highly targeted attacks. Honestly, it seems that if you were this inclined and able to get a specific app onto the user's phone, you might as well just work off an Android app you've already gotten delivered to the user's phone. Like Facebook.
Throw a privacy notice to the users "This app will take periodic screenshots of your phone" You'd be amazed how many people will accept it.
> Did you release the source code of Pixnapping?
We will release the source code at this link once patches become available: https://github.com/TAC-UCB/pixnapping
It's not exactly impossible to reverse-engineer what's happening here. You could have waited until it was patched, but it sounds like you wanted to get attention as soon as possible.
A patch for the original vulnerability is already public: https://android.googlesource.com/platform/frameworks/native/... and explicitly states in the commit message that it tries to defeat "pixel stealing by measuring how long it takes to perform a blur across windows."
The researchers aren't releasing their code because they found a workaround to the patch.
Then there's a bunch of "no GPU vendor has committed to patching GPU.zip" and "Google has not committed to patching our app list bypass vulnerability. They resolved our report as “Won’t fix (Infeasible)”."
And their original disclosure was on February 24, 2025, so I don't think you can accuse them of being too impatient.
As for "This app will take periodic screenshots of your phone", you still need an exploit to screenshot things that are explicitly excluded from screenshots (even if the user really wants to screenshot them.)
If genuine, this finger-pointing is an interesting approach to a security vulnerability. The last time I read such arguments was 20 years ago, from a different firm in California, and it was not to their advantage.
I'm no expert in security, but I'm guessing that if you install an app on a Windows desktop computer, it can cause more chaos, faster and more discreetly, than pixnapping can on Android.
If you use the same password on two websites, either of the two websites can use it to log you in to the other one (if it doesn't have an extra layer of security).
On paper, security is pretty weak, yet in practice these attacks are not very common or easy to pull off.
Bunnie's Precursor? It sounds cool, but it's also expensive as fuck. If you thought $100 for a graphing calculator was a ripoff, the Precursor is a similar form factor and level of computational power, but costs $1000 and can't be used in maths exams.
From reading comments on HN over the past couple of years, I'm disappointed at how terrible security practices and knowledge have become. All of this stuff is about to get a lot worse with generative AI.
There are complaints on this story, and on the recent one about the FSF phone project, about how inconvenient it is to not be able to access banking apps on your mobile phone. I can't be bothered to enter my banking password every 30 minutes on my desktop! What, I'm supposed to have two phones?
The first thing someone is going to do when they steal your phone (after they saw you enter your password in public) is open your banking and money apps and exfiltrate as much as they can from your accounts. This happens every single day. None of those apps should be installed or logged in on your phone. Same goes for 2FA apps. That's like traveling with Louis Vuitton luggage which is basically a "steal me" sign.
That's the most basic stuff for people who aren't a CEO of a company that is in the crosshairs of state sponsored espionage attacks.
The problems with "bare bones secure OS" device remain the same from a physical access standpoint: social engineering, someone sees your password, steals the device. But otherwise, yes, the devices you install a bunch of spyware/adware games on and take to bars should not be the ones you are doing your banking, 2FA, work, etc on ever.
In the previous discussion everyone seemed happy that it had been patched and that there was nothing to worry about (even though Android phones mostly don't run anything like the latest Android).
But in this write-up they say the patch doesn't fully work.
The bigger issue is the side channel itself, which leaks information from secure windows, even from protected buffers, potentially including DRM-protected content.
While these blurs make the side channel easier to use, since they provide a clear signal, considering you can predict the exact contents of the screen, I feel like you could get away with just a mask.
I'm not a phone designer, but could we imagine a new class of screen region that is excluded from screen grabs, draw-over, and soft-focus effects (masked instead), and then have notifications that show OTPs or PINs subscribe to it?
App developers can already dynamically mark their windows as secure which should prevent any other app from reading the pixels it rendered. The compositor composites all windows, including secure windows and applies any effects like blur. No apps are supposed to be able to see this final composited image, but this attack uses a side channel they found that allows apps on the system to learn information about the pixels within the final composition.
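The "mark as secure" mechanism referred to here is the standard FLAG_SECURE window flag. A minimal sketch of how an app opts in (the activity name is made up for illustration):

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

// Hypothetical activity displaying an OTP. FLAG_SECURE asks the system to
// treat the window content as secure: no screenshots, no screen recording,
// and no display on non-secure outputs. Pixnapping's point is that the GPU
// side channel leaks information about the final composition even when
// windows are flagged this way.
class OtpActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
    }
}
```

The flag protects against explicit capture APIs, not against timing measurements of the compositor, which is why it does not stop this attack.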
The attack needs you to be able to alter the blur of pixels in a secure window; this could be forbidden. A secure window should draw 100% as requested or not at all.
The blur happens in the compositor. It doesn't happen in the secure windows.
>A secure window should draw 100% as requested or not at all.
Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
> Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
These sort of restrictions also often interfere with accessibility and screen readers.
Either the screen reader is built into the OS as signed + trusted (and locks out competition in this space), or it's a pluggable interface, that opens an attack surface to read secure parts of the screen.
Right but night mode is built into the OS so you can easily make an exception (same for things like toasts). Are there use cases where you need a) a secure window, and b) a semi-transparent app-controlled window on top of it?
Things like this make me wonder if the social media giants use attacks like these to gain certain info about you and advertise to you that way.
Either that or Meta's ability to track/influence emotional state by behaviour is that good that they can advertise to me things I've only thought of and not uttered or even searched anywhere.
>"It looks like the IT security world has hit a new low," Torvalds begins. "If you work in security, and think you have some morals, I think you might want to add the tag-line: "No, really, I'm not a whore. Pinky promise" to your business card. Because I thought the whole industry was corrupt before, but it's getting ridiculous," he continues. "At what point will security people admit they have an attention-whoring problem?"
https://www.techpowerup.com/242340/linus-torvalds-slams-secu...
Do not install apps. Use websites.
Apps have way too many permissions, even when they have "no permissions".
I don't own or carry a smart phone. I'm still able to get by without one, but just barely.
I only used it once, in February, so hopefully they haven't broken it since then.
https://www.hertzbleed.com/gpu.zip/
P.S.: where did you see this discussion?
On desktop, apps aren't sandboxed. On mobile, they are. Breaking out of the sandbox is a security breach.
On desktop, people don't install an app for every fast food chain. On mobile, they do.
We have this tendency of adding more and more "features", more and more functionality, 85% of which nobody asked for or has any use for.
I believe there will be a market for a small, bare-bones secure OS in the future, akin to how FreeBSD is run.
https://www.bunniestudios.com/blog/2020/introducing-precurso... (currently down, might be up later)
>Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
That seems well worth the trade to me.