In anything but rural environments, there are probably lots of situations where open Wi-Fi networks are either already available or highly likely to be. Dorms, apartments, anything like that becomes a mess.
I’m really sorry, but all I have is confidence and a little surface-level knowledge that self-hosting is possible.
What happens when there’s an open Wi-Fi connection close enough for the smart TV to connect to?
Ente and Immich are both projects like that: each is trying to be a drop-in replacement for Google Photos. Immich requires you to self-host, while Ente makes self-hosting an option that doesn’t look too daunting.
The pricing is weird: Immich (like other FUTO sponsored projects) has a WinRAR-style license that requires you to pay them for hosting an instance, but only once, and you can technically ignore it. Ente, meanwhile, allows you to use their apps with third-party instances without charging for the privilege.
I would definitely recommend checking out either. I held out for a long time, because I thought image hosting might not be useful (and because deleting local photos is still a bit of a crapshoot, both backup-wise and functionality-wise) but it turns out to be pretty nifty.
If Mozilla must throw money at AI, this is the way to go… I guess. Ente is trying to build a Google Photos replacement that translates image contents into searchable text while being fully end to end encrypted (read: as private as it gets), after all. Ente also allows you to fully self-host, so you can get these benefits without even trusting their servers.
Out of the $65 million Mozilla has committed to throwing at for-profit and AI companies (that’s roughly 9.4 Mitchell Baker Salaries), $100,000 is a drop in the bucket, only 1.44% of the size of a Mitchell Baker Salary.
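For what it’s worth, the back-of-the-envelope arithmetic checks out; here’s a quick sketch, assuming a “Mitchell Baker Salary” of roughly $6.94 million (a figure inferred from the quoted ratios, not an official number):

```python
# Back-of-the-envelope check of the ratios above.
# SALARY is an assumption inferred from the quoted ratios, not an official figure.
COMMITTED = 65_000_000  # Mozilla's commitment to for-profit and AI companies
GRANT = 100_000         # the grant under discussion
SALARY = 6_940_000      # one assumed "Mitchell Baker Salary"

print(round(COMMITTED / SALARY, 1))    # roughly 9.4 salaries
print(round(GRANT / SALARY * 100, 2))  # roughly 1.44% of one salary
```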
I remain skeptical about Mozilla’s commitment to “open source AI models” when I haven’t seen a single AI model released that is reproducible or open source. They are black boxes, and black boxes so closed that not even the people who created them could tell you what’s inside (unless we count the blood, sweat, and tears of the underpaid workers behind them).
Full disclosure: I am a paying Ente subscriber
@[email protected] would you consider submitting this picture in Mozilla’s latest Firefox fan art competition? So far, there aren’t any entries.
To add to this: I’m not fine with it either, and I don’t think Mozilla should assume people’s consent.
Well, I don’t foresee any downsides. Hopefully they can continue making an incredible browser and operating system respectively.
Language removed so I can elaborate:
I don’t believe Google sets aside the money made through Firefox exclusively for Firefox. (If you believe this is the case, good luck demonstrating it, I guess.) Google’s money probably goes into a big pool named “ad revenue”, and that pool is probably filled disproportionately with Google’s own Chrome users.
Again, Google is doing to Mozilla what Microsoft did for Apple: hurling money at them under the facade of an exchange, in order to stave off regulators.
This isn’t the first time a company funded its competitor to avoid monopoly accusations. Microsoft did it to Apple. So it’s not like Google is simply returning the wealth Mozilla is providing it out of some generosity. Maybe they are, but I find the desire to remain out of the clutches of regulators to be an equally compelling explanation.
And given the fact that (despite Mozilla’s best attempts to the contrary) Firefox users tend to be on the nerdy and privacy-oriented side, with both the proclivity and the capacity to block ads, I imagine that Google probably pulls from the revenue sucked out of Chrome users rather than Firefox ones. But that’s just a theory, a browser theory.
I’m not a fan of Mozilla accepting money from Google, but it’s absolutely preferable to having a clause in their privacy policy that allows them to sell geolocation data directly to advertising partners. Pre-2023, I don’t think they did that.
It’s interesting they use code names but never really describe what the features should be. I think they’re changing the UI of Incognito Mode too, or at least doing something labeled under “Felt Privacy.”
I don’t expect it to be user friendly, but surely there is a description of it around somewhere?
Ironically, I don’t always agree with the Mozilla brass, but when the CTO said
We consider modal consent dialogs to be a user-hostile distraction from better defaults.
… I kind of agree, in just this one instance. What a dialog to present.
You must be one of the few that do not believe they should diversify
This is an incorrect read of what I said. I said I don’t buy the assumption that Mozilla is diversifying into anything good:
If you believe this, you need to deal with the cognitive dissonance that comes with it, and explain why you believe in them while simultaneously believing the opposite of them.
Unlike you, I provided explicit examples of bad diversification. Where are your examples of the good?
You seem surprised that you are supposed to back up your opinions and bring references to a discussion.
User-unique data gets collected, and then that user-unique data is sent to a remote server.
Only on the remote server will this data be aggregated, or so Mozilla says.
The argument that “It is just a new, additional means of tracking users” also doesn’t really make sense - even if we assume that this is a new means of tracking.
It is a new means of tracking. It is extra telemetry provided by Mozilla to advertisement partners.
it doesn’t make a difference.
It makes a difference because Mozilla went out of its way to inject this tracking into a browser that is supposedly made for users.
It does not escape me, by the way, that Mozilla is now a de jure advertising corporation: since FakeSpot they’ve sold private data to third party advertisers, and since Anonym they’ve operated an advertising-specific wing.
Because of this, Mozilla can no longer make any statements about online advertising without a huge conflict of interest, which they should disclose.
Except Mozilla enables the telemetry by default and does not ship an ad blocker.
Also, the data is not anonymized until after upload; you must trust Mozilla to do this. And I don’t know how much I trust Mozilla after they refused to announce this change to their users.
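To make that distinction concrete, here’s a minimal sketch (hypothetical function and field names, not Mozilla’s actual telemetry code) of the difference between uploading user-unique records for server-side aggregation and aggregating on the device before anything leaves it:

```python
from collections import Counter

# Hypothetical illustration, not Mozilla's actual telemetry pipeline.

def upload_raw(events):
    """User-unique records leave the device; you must trust the
    server to aggregate (and discard) them after upload."""
    return [{"user_id": e["user_id"], "ad": e["ad"]} for e in events]

def upload_aggregated(events):
    """Only per-ad counts leave the device; no user identifier is
    transmitted, so there is nothing to trust the server with."""
    return dict(Counter(e["ad"] for e in events))

events = [
    {"user_id": "u1", "ad": "shoes"},
    {"user_id": "u1", "ad": "shoes"},
    {"user_id": "u1", "ad": "travel"},
]
print(upload_raw(events))         # user_id present in every record
print(upload_aggregated(events))  # {'shoes': 2, 'travel': 1}
```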
I think a big part of the problem is that they didn’t show anyone a notification or an onboarding dialog or whatever about this feature, when it got introduced.
Right. Not only didn’t they notify anybody, but they took to Reddit to defend the decision not to notify anybody:
we consider modal consent dialogs to be a user-hostile distraction from better defaults, and do not believe such an experience would have been an improvement here.
Which is strange, because Mozilla has no problem with popups in general.
Is it just me, or is it strange that everything is conventional pixel art (all perfectly level squares at right angles) except for the M?
Found online:
With the introduction of even more AI services to Firefox, I wanted to express that, to me, it does seem like Firefox is missing the mark with its feature development. This sentiment is echoed by a lot of people in my social bubble of technologists, ethicists, and other people with the same priorities as what one could consider the values Firefox was built on.
Those concerns are, in my opinion, very valid. Machine learning models have been shown to be unreliable; just some of the recent examples from AI products made by large corporations include pointing users to eat pizza with glue and providing false information about just about anything. There are also a number of ethical issues yet to be resolved with the usage of AI: from its intense use of computing resources, which adds to electric grid demand, through privacy and possible copyright violations in the datasets that power the models, to an entire bag of other issues monitored by excellent resources such as the AIAAIC repository.
AI/machine learning is an amazing field with many promising applications, and yet its recent rise to fame is characterized by failures and issues in many implementations. Personally, I often don’t even see whether a given application of AI is truly necessary; in many cases a human would do a more trustworthy and faster job of information gathering than a large machine learning model.
When we compare the current state of “AI”, does it reflect what Firefox stands for? Does it reflect Mozilla’s principles?
Let’s compare.
The internet is a global public resource that must remain open and accessible.
LLMs are known for being black boxes. Depending on our definition of “open and accessible”, LLMs can be a very free resource or a completely inaccessible black box of math.
Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
It’s clear that LLMs pose a privacy risk to Internet users, in at least two ways. The first is that the data they are trained on sometimes contains private information due to a negligent training process; in that case, users of the trained model can possibly access private information. The second risk is, of course, the usage of third-party services that may use information to infringe on users’ privacy. While Mozilla’s blog assures that “we are committed to following the principles of user choice, agency, and privacy as we bring AI-powered enhancements to Firefox”, it’s unclear how supporting services such as “ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral” helps protect Firefox users’ privacy. Giving users choice should not compromise their safety and privacy.
Magnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.
In my opinion, in the process of designing AI functionality on top of Firefox, there was no evaluation of how that functionality could benefit the public. As mentioned above, there are a number of issues with LLMs; they can be dangerous and work to the detriment of users. Investing in and supporting technology of this type may lead to terrible consequences with little actual benefit.
In addition Mozilla claims:
I argue that AI models are the opposite of that. AI output is not verifiable, and it works against sharing knowledge by producing seemingly accurate information that turns out to be false.
I ask Mozilla to reevaluate the impact of AI, considering all of these points. I ask on behalf of myself as well as the many users I see on the Fediverse who are greatly worried and frustrated by the AI changes added on top of Firefox. There is certainly a lot of potential greatness that could be done with AI, but those steps must be taken responsibly.