Or at least require a decent font.
Artists, construction workers, administrative clerks, police officers and video game developers all develop their neural networks in the same way, and that process of learning is what ANNs simulate.
This is not “foreign to most artists”; it’s just that most artists have no idea what the mechanism of learning is.
The method by which you provide input to the network for training isn’t the same thing as learning.
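To make that distinction concrete, here is a minimal sketch of a single-weight network trained on a hypothetical toy task (plain NumPy, my own illustration rather than anything from the original discussion): feeding inputs in is just the mechanism; the learning is the gradual change in the weights.

```python
import numpy as np

# Hypothetical toy setup: a one-weight "network" learning y = 2x.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(32, 1))   # inputs we "show" the network
y = 2.0 * x                            # targets

w = np.zeros((1, 1))                   # the network's only weight
lr = 0.1                               # learning rate

for step in range(500):
    pred = x @ w                        # forward pass: feed the input in
    grad = x.T @ (pred - y) / len(x)    # gradient of 1/2 * mean squared error
    w -= lr * grad                      # the actual "learning": the weight changes

print(w)  # converges to ~2.0 -- the knowledge now lives in the weight, not the inputs
```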
Problem is, their “experiment” is resulting in the return of previously eradicated diseases.
There are valid concerns with regard to bidet use. They do produce more aerosolized particulates than wiping does, which means you are literally breathing in more feces.
Is it enough to be problematic? Probably not, but that may also depend on how aggressively/frequently you use them.
See also:
AI/LLMs can train on whatever they want, but when these LLMs are then used commercially to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.
And does this apply equally to all artists who have seen any of my work? Can I start charging all artists born after 1990 for training their neural networks on my work?
Learning is not and has never been considered a financial transaction.
As someone who has worked extensively with the homeless, I’ve seen quite a few examples where supposedly anti-homeless takes were actually attempts to inject more nuance into the discussion than simply being pro- or anti-homeless, both of which are practically meaningless positions.
Looking over their concerns, I’m not sure that they have a leg to stand on. The claim they’re making is that they’ve measured an increase in hate-related tweets (I’ll take them at their word on this) and then they associate this with Musk taking over.
They present no evidence for this latter claim and do not, as far as I can see, make any attempt to compare against increases in hate on other social media platforms.
Grooming, for example, is one topic they covered. But this is a topic that Republicans have been pushing increasingly as election season spins up. Musk didn’t cause that, and that kind of nonsense can be found on Facebook and reddit as well.
I’m inclined to sympathize with an underdog nonprofit, but in this case I just can’t see why they expected not to get pushback on such poorly grounded claims.
I know it can be hard to have your ideas questioned, but at least try to be civil. I never questioned your intentions, yet you’re acting like I’m crazy.
I think that’s all you. I have never suggested that you are crazy. I suggested that calling Microsoft software “safe,” as opposed to Linux, which is “insecure,” sounds like trolling. But that’s because it sounds like trolling. No crazy stated or implied.
A walled garden is obviously more secure than an open source project because nobody can even see the code to find vulnerabilities in it.
You should learn more about the world of software. Seriously. Security experts have been reasonably unanimous in their support of the “Many Eyes Make All Bugs Shallow” approach to software security for decades, even while criticizing its use as a mantra that presumes open source software is secure by default.
But just to put it in a simple, logically sealed box: Microsoft’s source code has been leaked several times, and of course, bad actors have probably gained access to it over the years without such public knowledge. This means that the fundamental difference between Microsoft’s proprietary codebase and open source codebases is not, and cannot be, the availability of source code. Rather, it is the ability of independent groups to review the code on an ongoing basis.
When the only difference is independent review, the only possible result is higher security.
I understand that you like horses. You ride one every day, and you might have even named your horse. The fact is that it’s time to buy a car.
None of this constitutes a logical refutation to the examples I provided, which are critical components of modern software development and deployment.
Source: I’m a professional software release engineer who has worked with many of the world’s largest corporations.
Quality software costs money
For starters, this is unfounded cargo culting. There is no evidence for this at all. I can point to dozens of very expensive piles of crufty old software that no one should ever go near, and also to some free software that is literally foundational to the modern software world.
Money has nothing to do with the quality of software, but you’re also mistaken if you think open source software is free. You can pay IBM millions of dollars for a suite of enterprise-ready open source software. The cost of such software is rarely the software itself. It’s services, support, training and customization.
Throwing rocks is also simpler than firing a gun, yet modern militaries aren’t training slingers anymore.
But they are succeeding wildly by using largely open source software running on open hardware for drones, networking, battlefield analysis, logistics, etc.
The best example of this is Linux.
Ouch… so, you might want to learn more about technology before commenting in a Technology community…
why does a modern operating system require you to use a terminal
Because a terminal is one of the most powerful modes of interaction ever invented. It can serve as a relatively low-tech UI, but it is also simple enough to be used as a machine interface. It is lightweight and keeps working even when other protocols and interfaces are thwarted by infrastructure issues, because it is simple text. Because that text is also meant to be read by a human, it makes for a great interface for logging. You don’t have to guess at which obscure standard (if any) to use to talk to it, compliance with the relevant standards is baked into nearly every language ever written, etc.
Try building a system like Kubernetes on graphical UIs… I dare you.
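As a small illustration of that human/machine duality (my own sketch, not part of the original comment; `df -P` is just a stand-in command), the same plain text a person reads in a terminal can be parsed by a script with no special protocol:

```python
import subprocess

# Run a standard command-line tool; its output is plain text a human could read.
result = subprocess.run(["df", "-P"], capture_output=True, text=True, check=True)

# The very same text doubles as a machine interface: split lines, split columns.
for line in result.stdout.splitlines()[1:]:        # skip the header row
    fields = line.split()
    filesystem, capacity = fields[0], fields[4]    # e.g. "/dev/sda1", "42%"
    if int(capacity.rstrip("%")) > 90:
        print(f"warning: {filesystem} is {capacity} full")
```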
It’s THE example of ancient software being pushed onto naive techies.
What industry are you working in?! AWS is nearly all Linux. Google Cloud is nearly all Linux. Android is Linux. Hell, even Microsoft finally relented and is now strongly supporting their Windows Subsystem for Linux (WSL) because it’s necessary for supporting modern cloud applications.
that would rather have an insecure open source project than a safe, walled garden like Microsoft Windows 11.
Okay, this has to be a troll… right? This is a troll? Please tell me you can’t be serious.
I wouldn’t say obsolete because that implies it’s not really used anymore.
I’m not sure where you heard someone use the word “obsolete” that way, but I assure you that there are thousands if not millions of examples of obsolete technologies in constant and everyday use.
In an effort to have a smooth and quick transition to this new infrastructure, we will migrate chat messages sent from January 1, 2023 onward. This change will be effective starting June 30th.
It really seems like everything reddit does is rushed and defaults to harming the users. It’s as if they’re actively sabotaging their own platform.
Yeah, this is important. Make it a really big number too so that I have to change my password lots of times in a row in order to put it back to what it was. ;)
It MIGHT not be as bad as you think. If the UI was just terrible at communicating and what it actually meant was, “that password is in our database of known compromised passwords,” then that would be reasonable. Google does this now too, but I think they only do it after the fact (e.g. you get a warning that your password is in a database of compromised passwords).
Fun fact: password controls like this have been obsolete since 2020. Standards that guide password management now focus on password length and external security features (like 2FA and robust password hashing for storage) rather than on the individual characters in passwords.
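For what it’s worth, here is a rough sketch of what that “known compromised password” style of check can look like, using Python’s standard library and the public Have I Been Pwned range API as I understand it (the 12-character minimum is an assumed policy choice, not a quote from any standard):

```python
import hashlib
import urllib.request

def password_is_acceptable(password: str) -> bool:
    # Modern guidance favors length over character-composition rules.
    if len(password) < 12:   # assumed minimum; tune to your own policy
        return False

    # k-anonymity lookup: only the first 5 hex chars of the SHA-1 leave the machine.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    request = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-example"},
    )
    with urllib.request.urlopen(request) as response:
        body = response.read().decode("utf-8")

    # Each response line is "SUFFIX:COUNT"; a matching suffix means the password
    # appears in a known breach corpus, so reject it.
    for line in body.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate == suffix:
            return False
    return True

print(password_is_acceptable("correct horse battery staple"))
```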
They cannot be anything other than stochastic parrots because that is all the technology allows them to be.
Are you referring to humans or AI? I’m not sure you’re wrong about humans…
Clearly the Founding Fathers were not advanced enough to have crafted the US Constitution unaided.
In a sense you are correct. They cribbed from many of the best-known political philosophers of the time. For example, there are direct quotes from Locke in the Declaration, and his influence on the Constitution can be felt clearly.
What you are describing is true of older LLMs; it’s less true of GPT-4. GPT-5, or whatever it is they are training now, will likely begin to shed these issues.
The shocking thing we discovered that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of the training set. AI researchers were convinced that this was not possible until GPT proved that it was.
So the idea that you can look at the limitations of the current generation of LLM and make blanket statements about the limitations of all future generations is demonstrably flawed.
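For context, the scaling results usually cited here (e.g. Kaplan et al., 2020) fit test loss to a simple power law in model size. The toy sketch below uses constants roughly in line with the ones reported there, purely for illustration of why each jump in scale keeps paying off:

```python
# Toy illustration of a power-law scaling curve, L(N) = (Nc / N) ** alpha.
# The functional form follows the Kaplan et al. (2020) fits; the constants are
# approximately the values reported there, used here only for illustration.
Nc = 8.8e13      # assumed critical parameter count
alpha = 0.076    # assumed scaling exponent

def loss(n_params: float) -> float:
    return (Nc / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss ~ {loss(n):.3f}")
```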
Note that though AI is the new hotness and grabs headlines, this a) doesn’t actually apply only to AI and b) has been done for at least a decade.
Many actors have refused such clauses (I know Sam Jackson is one of them) but many have not.
Putting actors’ faces on CGI bodies is something Hollywood has been working on for a long time, and AI is just a tool that improves on what we’ve been doing for a while.
This is only partially true. In the US (which tends to set the tone on copyright, but other jurisdictions will weigh in over time) generative AI cannot be considered an “author.” That doesn’t mean that other forms of rights don’t apply to AI generated works (for example, AI generated works may be treated as trade secrets and probably will be accepted for trademark purposes).
Also, all of the usual transformations which can take work from the public domain and result in a new copyrightable derivative also apply.
This is a much more complex issue than just “AI bots never had rights to waive.”