

There are no Internet Browsers that cannot be tracked, or are there?

Most of the things you mentioned are implemented in the "Browser" that I've built. It's using multicast DNS to discover neighboring running instances, and it has an offline-cache-first mentality, which means that e.g. download streams are shared among local peers. Global peer discovery is solved via mapping of identifiers via the reserved TLD, and via mutual TLS for identification and verification. So peers are basically pinned client certificates in your local settings (both mechanisms are sketched below). Works for most cases; I had to implement a couple of breakout tunnel protocols though, so that peer discovery works failsafe even when known IPs/ASNs are blocked. Relaying and scattering traffic happens automatically, so that no correlation of IPs to scraped websites can be done by an MITM. Tunnel protocols are all generically implemented; DNS exfiltration, HTTPS smuggling, ICMP tunnels, and pwnat already work pretty failsafe. Lots of work to be done though, and I had to focus on a couple of other things first before I can get back to the project.

The Iran Firewall: A preliminary report

From a technological point of view, TOR still has a couple of flaws which make it vulnerable to the metadata logging systems of ISPs:

- it needs a trailing non-zero buffer, randomized by the size of the payload, so that stream sizes and durations don't match (sketched below)
- it needs a request scattering feature, so that the requests for a specific website don't get proxied through the same nodes/paths (sketched below)
- it needs a failsafe browser engine, which doesn't give a flying damn about WebRTC and decides to actively drop features
- it needs to stop monkey-patching out ("stubbing") the APIs that compromise user privacy, and start removing those features

I myself started a WebKit fork a while ago but eventually had to give up due to the sheer amount of work required to maintain such an engine project. I called it RetroKit, and I documented which features in WebKit were already usable for tracking and had to be removed.

I'm sorry to be blunt here, but all that user-privacy-valuing Electron bullshit that embeds Chrome in the background doesn't cut it anymore. And neither does Firefox, which literally goes rogue in an endless loop of requests when you block its tracking domains. The config settings in Firefox don't change shit anymore; it will keep requesting the tracking domains. It does the same in Librefox and all the *wolf profile variants; just use a local eBPF firewall to verify. I added my (incomplete) opensnitch ruleset to my dotfiles for others to try out.

If I were to rewrite a browser engine today, I'd probably go for golang. But golang probably makes handling arbitrary network data a huge pain, so it's kinda useless for failsafe HTML5 parsing.
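For illustration, the multicast DNS discovery described in the first comment can be sketched in Go, here with the hashicorp/mdns library; the service name "_browser._tcp", the port, and the TXT record are placeholders I made up, not what the project actually announces:

    package main

    import (
        "fmt"
        "os"

        "github.com/hashicorp/mdns"
    )

    func main() {
        // Announce this instance on the local network so that neighboring
        // instances can find it. Service name and port are placeholders.
        host, _ := os.Hostname()
        service, err := mdns.NewMDNSService(host, "_browser._tcp", "", "", 65432, nil, []string{"peer"})
        if err != nil {
            panic(err)
        }
        server, err := mdns.NewServer(&mdns.Config{Zone: service})
        if err != nil {
            panic(err)
        }
        defer server.Shutdown()

        // Browse for other running instances announcing the same service.
        entries := make(chan *mdns.ServiceEntry, 8)
        go func() {
            for entry := range entries {
                fmt.Printf("neighbor: %s at %s:%d\n", entry.Name, entry.AddrV4, entry.Port)
            }
        }()
        mdns.Lookup("_browser._tcp", entries)
        close(entries)
    }

Once a neighbor is known, cached downloads can be served to it instead of hitting the network again, which is the offline-cache-first part.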
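"Peers are pinned client certificates in your local settings" maps almost one-to-one onto Go's standard TLS stack. A minimal sketch of the accepting side, assuming the pinned peer certificates sit in a local PEM file (the paths and port are placeholders):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Load the pinned peer certificates from local settings. "peers.pem"
        // is a placeholder path; it would hold the self-signed certificate of
        // every known peer, acting as a private CA pool.
        pinned, err := os.ReadFile("peers.pem")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pinned) {
            log.Fatal("no valid peer certificates found")
        }

        // Our own identity, which peers that pinned us will verify in turn.
        cert, err := tls.LoadX509KeyPair("self.pem", "self.key")
        if err != nil {
            log.Fatal(err)
        }

        srv := &http.Server{
            Addr: ":65432",
            TLSConfig: &tls.Config{
                Certificates: []tls.Certificate{cert},
                // Mutual TLS: drop any connection whose client certificate
                // does not verify against the pinned pool.
                ClientAuth: tls.RequireAndVerifyClientCert,
                ClientCAs:  pool,
            },
        }
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }

The dialing side mirrors this: RootCAs set to the same pinned pool, plus its own certificate in Certificates, so both ends verify each other.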
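"Generically implemented" tunnel protocols suggest a single interface that DNS exfiltration, HTTPS smuggling, ICMP tunnels, and pwnat all implement. The shape below is my guess at such an abstraction, not the project's actual API:

    package tunnel

    import (
        "context"
        "errors"
        "net"
    )

    // Tunnel is a generic transport that can carry peer traffic even when
    // the direct route is blocked. Each breakout protocol implements it.
    type Tunnel interface {
        // Name identifies the protocol, e.g. "dns", "https", "icmp" or "pwnat".
        Name() string
        // Dial opens a connection to the given peer through this tunnel.
        Dial(ctx context.Context, peer string) (net.Conn, error)
    }

    // registry holds every compiled-in tunnel implementation.
    var registry []Tunnel

    // Register is called by each protocol package, typically from init().
    func Register(t Tunnel) { registry = append(registry, t) }

    // DialAny tries each tunnel in order until one breaks through, which is
    // what keeps peer discovery working when known IPs/ASNs are blocked.
    func DialAny(ctx context.Context, peer string) (net.Conn, error) {
        err := errors.New("tunnel: nothing registered")
        for _, t := range registry {
            conn, dialErr := t.Dial(ctx, peer)
            if dialErr == nil {
                return conn, nil
            }
            err = dialErr
        }
        return nil, err
    }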
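The trailing-buffer item from the TOR list above is essentially randomized length padding. A naive sketch: append 1..len(payload) random non-zero bytes, so that on-wire size and transfer duration stop correlating with the payload; the framing a receiver would need to strip the padding again is omitted:

    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    // pad appends between 1 and len(payload) random non-zero bytes, so that
    // the on-wire size and the transfer duration no longer correlate with
    // the payload. A real protocol would frame the payload length so the
    // receiver can strip the padding again; that framing is omitted here.
    func pad(payload []byte) ([]byte, error) {
        if len(payload) == 0 {
            return payload, nil
        }
        n, err := rand.Int(rand.Reader, big.NewInt(int64(len(payload))))
        if err != nil {
            return nil, err
        }
        padding := make([]byte, n.Int64()+1)
        if _, err := rand.Read(padding); err != nil {
            return nil, err
        }
        for i, b := range padding {
            if b == 0 { // keep the trailing buffer strictly non-zero
                padding[i] = 0xff
            }
        }
        return append(payload, padding...), nil
    }

    func main() {
        out, _ := pad([]byte("GET /index.html"))
        fmt.Printf("padded from 15 to %d bytes\n", len(out))
    }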
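And the request-scattering item can be approximated with a per-request proxy selector that refuses to reuse the exit a host got last time. This collapses "nodes/paths" down to a single proxy hop, and the relay URLs are placeholders:

    package main

    import (
        "math/rand"
        "net/http"
        "net/url"
        "sync"
    )

    // scatter picks an exit per request and never reuses the one that the
    // same host got last time, so consecutive requests to one website
    // travel different paths.
    type scatter struct {
        mu    sync.Mutex
        exits []*url.URL
        last  map[string]int // hostname -> index of the previously used exit
    }

    func (s *scatter) pick(req *http.Request) (*url.URL, error) {
        s.mu.Lock()
        defer s.mu.Unlock()

        host := req.URL.Hostname()
        idx := rand.Intn(len(s.exits))
        if prev, ok := s.last[host]; ok && len(s.exits) > 1 {
            for idx == prev { // re-roll until we leave via a different exit
                idx = rand.Intn(len(s.exits))
            }
        }
        s.last[host] = idx
        return s.exits[idx], nil
    }

    func main() {
        var exits []*url.URL
        for _, raw := range []string{"socks5://relay-a.example:1080", "socks5://relay-b.example:1080", "socks5://relay-c.example:1080"} {
            u, err := url.Parse(raw)
            if err != nil {
                panic(err)
            }
            exits = append(exits, u)
        }
        s := &scatter{exits: exits, last: map[string]int{}}
        client := &http.Client{Transport: &http.Transport{Proxy: s.pick}}
        _ = client // each client.Get(...) to the same host now leaves via a different relay
    }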
