This (https://danielmiessler.com/blog/keep-the-robots-out-of-the-gym) is roughly my current approach to AI: keep the robots out of the gym, but otherwise use AI wherever I can imagine it would be useful.
AI assisting with protein folding for antiviral and vaccine research, trained exclusively on carefully curated scientific data: hell yeah.
Generic commercial AI, trained on copyrighted data and shoved down our throats in every single app: hell no.
OpenAI now using Grokipedia: 🤮
The question is not Yes or No to AI, where I'm left with a devil of a time if I choose No ...
The question is Yes or No to Privacy, where I sell my soul to the devil if I choose No!
Today, AI is Search 3.0, with three new features that make it less like a query box and more like a friend!
1. Instead of unwieldy lists of links, I get a focused response from a seemingly all-knowing expert.
2. Furthermore, I have control over the nature of responses, short or long, profound or fun.
3. The result is an alluring dialog with the likes of Socrates, Shakespeare or perhaps even Sagan.
So if given a choice, who would not say YES to AI once they get comfortable with it?
There must be a catch ... it's the Privacy, stupid!
Data, my data, my soul, has become the gold rush for GOOG, GROK, META, MSFT and OpenAI!
1. I may think I can tell AI everything since it won't judge or tell, but I'd be grossly mistaken.
2. In fact, every word I type is being pumped into data centers taking over our landscape.
3. And without Privacy, my IP and my digital fingerprint are attached, so they know who I am.
Is there a sure-fire way for me and others like me who use AI to save our souls from this devil?
Say YES to DDG (https://www.duckduckgo.com), a company of integrity committed to Privacy!
Privacy is just one of the catches.
There's also the environmental impact, the ethical questions around the training data, and the fact that it hallucinates.
AI search teaches people to just accept whatever answer it spits out, even though it's far less accurate and can at any moment be partially or entirely wrong.
And *then* there's the concern for privacy.
Agree 100%!
1. Data center proliferation is a concern (H2O/energy), especially for the communities targeted to host them
2. Training data ethics: data bias, sourcing, privacy, etc., and the labeling sweatshops used to mitigate them
3. Hallucinations: training data, especially for LLMs, is never clean, so those sweatshops will continue
So now you know: at least 90% of your users hate this shit. You built DDG, and folks flocked to it because you promised something different from what big tech offers. But DDG should not default to "give all of my data to big tech" and merely allow users to opt out. You already know their opinion on the matter, because they choose to come to DDG every day instead of one of your competitors.
So why do this at all? Why alienate your core audience? Why spend a single dev cycle on this or spin up a single server resource to keep it running? The damage is already done when you spend time and money building AI features, and you've already taken an ethical stance on the matter by integrating it into your product. Offering the ability to opt out doesn't absolve DDG or its users in any way.
You have the opportunity to be the only one to outright reject slop before the bubble bursts. Offering a product that fully embraces what the public wants to see in the future is the path forward, not whatever this campaign is. Nothing sells me harder on a product right now than "there's no AI in it and there never will be".
This publicity stunt is a mockery of democracy. Even if the 90% of respondents here could somehow carve out lives free of LLM-based features in the software we use, our personal information would still be out there in the hands of organizations that face no accountability for their willful incompetence. They force their engineers to crunch out as much AI code slop as possible, often forgoing the most basic security practices.
They collect all of our works and our likenesses without our consent and grind them into bullshit. They build these new “products” with the same incentives they always have: to captivate us and keep us engaged as much as possible, dragging the most vulnerable among us into isolated, self-harming spirals.
I have no interest in this hole in the sand that DuckDuckGo has invited me to stick my head in.
Exactly. It’s about how we get the benefits of AI without the harmful privacy cost. That’s what I’m doing with mosslet (https://mosslet.com), a privacy-first social network and journal. We provide AI features in a privacy-first way, and I’m constantly looking for ways to improve them. Since some AI applications have real benefits, we need to think about how to deliver them in a good, privacy-first way. The same concern applies to the environmental and human impact of these systems: there are ways to do AI while putting people, privacy, and the environment first. I think the reason we struggle with this at all is that the economic reasoning driving the current AI landscape (and the companies behind it) is the same economic reasoning that has created crisis after crisis in our world.