Over half of all tech industry workers view AI as overrated
Best assessment I’ve heard: Current AI is an aggressive autocomplete.
I’ve found that relying on it is a mistake anyway; the amount of incorrect information I’ve seen from ChatGPT has been crazy. It’s not a bad thing to get started with, but it’s like reading a grade school kid’s homework: you need to proofread the heck out of it.
I feel like the AI in self-driving cars is the same way. They’re like driving with a 15-year-old who just got their learner’s permit.
Turns out that getting a computer to do 80% of a good job isn’t so great. It’s that extra 20% that makes all the difference.
That 80% also doesn’t take that much effort. Automation can still be helpful depending on how much effort it is to repeatedly do it, but that 20% is really where we need to see progress for a massive innovation to happen.
I actually disagree. AI is great at doing the parts that are easy to do mentally but still take time. This “fancy autocomplete” is where it shines, and it can accelerate the work of a professional by an order of magnitude.
I have found that it’s like having a junior programmer assistant. It’s great for “write me Python code that opens a file named in a command-line argument, reads the contents into a key/value dict, then closes the file.” It’s terrible for “write me Python code for pulling data into a Redis database.”
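That first kind of prompt really is the sweet spot; here’s roughly what a correct answer looks like (I’m assuming `key=value` lines as the file format, since the comment doesn’t specify one):

```python
import sys

def read_kv_file(path):
    """Read a file of key=value lines into a dict."""
    kv = {}
    with open(path) as f:  # the context manager closes the file for us
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue  # skip blank and malformed lines
            key, _, value = line.partition("=")
            kv[key.strip()] = value.strip()
    return kv

if __name__ == "__main__" and len(sys.argv) > 1:
    print(read_kv_file(sys.argv[1]))
```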
I find it’s wrong 50% of the time for certain command line switches, Linux file structure, and aws cli.
I find it’s terrible for advanced stuff like, “using aws cli and jq, take all volumes in a vpc, and display the volume id, volume size in GB, instance id it’s attached to, private IP address of the instance, whether it’s a gp3 or gp2, and the vpc id, in a comma-separated format, sorted by volume size.”
Even worse at, “take all my gp2 volumes and make them gp3.”
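For what it’s worth, part of why that kind of request trips it up is the join: the volume list alone doesn’t contain the private IP, so you need a second describe-instances call stitched in. The core transform is easy to check in plain Python once you have the JSON in hand. A sketch over a made-up, heavily trimmed describe-volumes payload (the sample data and field subset are mine, and this skips the instance join entirely):

```python
import json

# Hypothetical, heavily trimmed `aws ec2 describe-volumes` output.
sample = json.loads("""
{"Volumes": [
  {"VolumeId": "vol-aaa", "Size": 100, "VolumeType": "gp2",
   "Attachments": [{"InstanceId": "i-111"}]},
  {"VolumeId": "vol-bbb", "Size": 8, "VolumeType": "gp3",
   "Attachments": [{"InstanceId": "i-222"}]}
]}
""")

def volumes_csv(payload):
    """Emit volume-id,size-gb,instance-id,type rows sorted by volume size."""
    rows = []
    for v in payload["Volumes"]:
        instance = v["Attachments"][0]["InstanceId"] if v["Attachments"] else ""
        rows.append((v["VolumeId"], v["Size"], instance, v["VolumeType"]))
    rows.sort(key=lambda r: r[1])  # sort by size in GB
    return "\n".join(",".join(str(c) for c in r) for r in rows)

print(volumes_csv(sample))
```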
I recently used it to update my resume with great success. But I also didn’t just blindly trust it.
Gave it my resume and then asked it to edit it to more closely align with a guide I found on Harvard’s website. Gave it the guide as well, and it spit out a version of mine that much more closely resembled the provided guide.
Spent roughly 5 minutes editing the new version to correct for any problems it had, and boom. Half an hour of work pared down to under 10 minutes.
I then had it use my new resume (I gave it a copy of the edited version) and asked it to write me a cover letter for a job (I provided the job description)
Boom. Cover letter. I spent about 10 minutes editing that piece. And then that new resume and cover letter led to an interview and a subsequent job offer.
AI is a tool, not an all-in-one solution.
I just reviewed a PR today and the code was… bad, like unusually bad for my coworkers, and I left some comments.
Then my coworker said he used ChatGPT without really thinking about what he was copy-pasting.
Nice one! I have heard it called a blurry JPEG of the web.
And that’s entirely correct
No. It’s not, and hasn’t been for at least a year. Maybe the AI you’re dealing with is, but it’s shown understanding of concepts in ways that make no sense for how it was created. Gotta go.
it’s shown understanding of concepts
No it hasn’t.
It does a shockingly good analogue of “understanding” at the very least. Have you tried asking chatgpt to solve analogies? Those show up in all kinds of intelligence tests.
We don’t have agi, definitely, but this stuff has come a very long way and it’s quite close to being genuinely useful.
Even if we completely reject the “it’s ai,” we more or less have a natural language interface for computers that isn’t a shallow trick and that’s awesome.
Well here’s the question. Is it solving them, or just regurgitating the answer? If it solves them it should be able to accurately solve completely novel analogies.
Novel analogies. Very easy to prove this independently for yourself.
Yes, it has. The most famous example is the stacking of the laptop and the markers. You may not have access, but it’s about to eclipse us imho. I’m no technology fanboy either. 20 years ago I argued that it wouldn’t be possible for a computer to understand human speech; now that’s an everyday occurrence.
Maybe if you interpret its output as such.
Too bad it’s bullshit.
If you are actually interested in the topic, here’s a few good reads:
- Do Large Language Models learn world models or just surface statistics? (Jan 2023)
- Actually, Othello-GPT Has A Linear Emergent World Representation (Mar 2023)
- Eight Things to Know about Large Language Models (April 2023)
- Playing chess with large language models (Aug 2023)
- Language Models Represent Space and Time (Oct 2023)
As you can see, the past year has shed a lot of light on the topic.
One of my favorite facts is that it takes on average 17 years before discoveries in research find their way to the average practitioner in the medical field. While tech as a discipline may be quicker to update itself, it’s still not sub-12 months, and as a result a lot of people are continuing to confidently parrot things that have recently been shown in research circles to be BS.
Over half of tech industry workers have seen the “great demo -> overhyped bullshit” cycle before.
You just have to leverage the agile AI blockchain cloud.
Once we’re able to synergize the increased throughput of our knowledge capacity we’re likely to exceed shareholder expectation and increase returns company wide so employee defecation won’t be throttled by our ability to process sanity.
Don’t forget to make it connected to every device, ever
AIoT?
Every billboard in SF is just these words shuffled
NoSQL, blockchain, crypto, metaverse, just to name a few recent examples.
AI is overhyped, but it is, so far, more useful than any of those other examples, though.
These are useful technologies if used when called for. They aren’t all-in-one solutions like the smartphone killing off cameras, PDAs, media players… I think if people looked at them as tools that fix specific problems, we’d all be happier.
Every year sometimes.
Largely because we understand that what they’re calling “AI” isn’t AI.
AI doesn’t necessarily mean human-level intelligence, if that’s what you mean. The AI field has wrestled with this for decades. There can be “strong AI”, which is aiming for that human-level intelligence, but that’s probably a far off goal. The “weak AI” is about pushing the boundaries of what computers can do, and that stuff has been massively useful even before we talk about the more modern stuff.
Sounds like people here are expecting to see general-purpose AI and singularity stuff, but all they see is a pitiful LLM or other even more narrow AI applications. Remember, even optical character recognition (OCR) used to be called AI, until it became so common that it wasn’t exciting any more. What AI developers call AI today will just be basic automation a few decades from now.
Yup. LLM RAG is just search 2.0 with a GPU.
For certain use cases it’s incredible, but those use cases shouldn’t be your first idea for a pipeline
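The “search 2.0” framing is easy to see in code: the retrieval half of RAG is just nearest-neighbor search over embeddings, and the LLM only ever sees what the search step returns. A toy sketch, using bag-of-words counts where a real system would use learned embeddings on a GPU (the docs and query here are made up):

```python
import math
from collections import Counter

docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first of each month",
    "the api rate limit is 100 requests per minute",
]

def embed(text):
    """Toy embedding: a bag-of-words count vector (real RAG uses a learned model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents):
    """The 'search' half of RAG: return the document nearest the query."""
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))

context = retrieve("how do I reset my password", docs)
# The retrieved text is then pasted into the LLM prompt as grounding context:
prompt = f"Answer using only this context:\n{context}\n\nQ: how do I reset my password"
```

Everything “intelligent” about which document comes back happens in `retrieve`; the GPU and the model only enter the picture in the embedding and answer-generation steps.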
Given that AI isn’t purported to be AGI, how do you define AI such that multimodal transformers, trained on unthinkable amounts of human content mirroring a wide array of capabilities, and capable of developing abstract world models as linear representations, which lets them do things thought to be impossible as recently as three years ago (such as explain jokes not in the training set or solve riddles not in the training set), aren’t “artificial intelligence”?
THANK YOU! I’ve been saying this a long time, but have just kind of accepted that the definition of AI is no longer what it was.
I think it will be the next big thing in tech (or “disruptor” if you must buzzword). But I agree it’s being way over-hyped for where it is right now.
Clueless executives barely know what it is; they just know they want to get ahead of it in order to remain competitive. Marketing types reporting to those executives oversell it (because that’s their job).
One of my friends is an overpaid consultant for a huge corporation, and he says they are trying to force-retro-fit AI to things that barely make any sense…just so that they can say that it’s “powered by AI”.
On the other hand, AI is much better at some tasks than humans. That AI skill set is going to grow over time. And the accumulation of those skills will accelerate. I think we’ve all been distracted, entertained, and a little bit frightened by chat-focused and image-focused AIs. However, AI as a concept is broader and deeper than just chat and images. It’s going to do remarkable stuff in medicine, engineering, and design.
Personally, I think medicine will be the most impacted by AI. Medicine has already been increasingly implementing AI in many areas, and as the tech continues to mature, I am optimistic it will have tremendous effect. Already there are many studies confirming AI’s ability to outperform leading experts in early cancer and disease diagnoses. Just think what kind of impact that could have in developing countries once the tech is affordably scalable. Then you factor in how it can greatly speed up treatment research and it’s pretty exciting.
That being said, it’s always wise to remain cautiously skeptical.
The bad part is health insurance companies are also using AI.
Common US healthcare L
It is overrated. At least when they look at AI as some sort of brain crutch that redeems them from learning stuff.
My boss now believes he can “program too” because he lets ChatGPT write scripts for him that more often than not are poor BS.
He also pastes chunks of our code into ChatGPT when we file bugs or aren’t finished with everything in 5 minutes, as some kind of “gotcha moment”, ignoring that the solutions he then provides don’t work.
Too many people see LLMs as authorities that they just aren’t…
It bugs me how easily people (a) trust the accuracy of the output of ChatGPT, (b) feel like it’s somehow safe to use output in commercial applications or to place output under their own license, as if the open issues of copyright aren’t a ten-ton liability hanging over their head, and (c) feed sensitive data into ChatGPT, as if OpenAI isn’t going to log that interaction and train future models on it.
I have played around a bit, but I simply am not carefree/careless or am too uptight (pick your interpretation) to use it for anything serious.
Too many people see LLMs as authorities that they just aren’t…
This is more a ‘human’ problem than an ‘AI’ problem.
In general it’s weird as heck that the industry is full force going into chatbots as a search replacement.
Like, that was a neat demo for a low hanging fruit usecase, but it’s pretty damn far from the ideal production application of it given that the tech isn’t actually memorizing facts and when it gets things right it’s a “wow, this is impressive because it really shouldn’t be doing a good job at this.”
Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.
Overall, the past few years have opened my eyes to just how broken human thinking is, not as much the limitations of neural networks.
It is overrated. It has a few uses, but it’s not a generalized AI. It’s like calling a basic calculator a computer. Sure, it is an electronic computing device, and it makes a big difference in calculating speed for doing finances or retail cashiers or whatever. But it’s not a generalized computing system that can compute basically anything it’s given instructions for, which is what we think of when we hear something is a “computer”. It can only do basic math. It could never be used to display a photo, much less make a complex video game.
Similarly, the current thing that’s called “AI” can learn only in the very narrow subject it is designed for. It can’t learn just anything, it can’t make inferences beyond the training material, and it doesn’t understand. It can’t create anything totally new; it just remixes things. It could never actually create a new genre of games with some kind of interface that has never been thought of, or discover the exact mechanisms of how gravity works, since those things aren’t in its training material, because they don’t yet exist.
Some calculators can run DooM, though
Lol, those are different. I meant like a little solar powered addition, subtraction, multiplication, division and that’s it kind of calculator.
Many areas of machine learning, particularly LLMs are making impressive progress but the usual ycombinator techbro types are over hyping things again. Same as every other bubble including the original Internet one and the crypto scams and half the bullshit companies they run that add fuck all value to the world.
The cult of bullshit around AI is a means to fleece investors. Seen the same bullshit too many times. Machine learning is going to have a huge impact on the world, same as the Internet did, but it isn’t going to happen overnight. The only certain thing that will happen in the short term is that wealth will be transferred from our pockets to theirs. Fuck them all.
I skip most AI/ChatGPT spam in social media with the same ruthlessness I skipped NFTs. It isn’t that ML doesn’t have huge potential but most publicity about it is clearly aimed at pumping up the market rather than being truly informative about the technology.
ML has already had a huge impact on the world (for better or worse), to the extent that Yann LeCun proposes that the tech giants would crumble if it disappeared overnight. For several years it’s been the core of speech-to-text, language translation, optical character recognition, web search, content recommendation, social media hate speech detection, to name a few.
ML based handwriting recognition has been powering postal routing for a couple of decades. ML completely dominates some areas and will only increase in impact as it becomes more widely applicable. Getting any technology from a lab demo to a safe and reliable real world product is difficult and only more so when there are regulatory obstacles and people being dragged around by vehicles.
For the purposes of raising money from investors it is convenient to understate problems and generate a cult of magical thinking about technology. The hype cycle and the manipulation of the narrative has been fairly obvious with this one.
I remember when it first came out, I asked it to help me write a MapperConfig custom strategy and the answer it gave me was so fantastically wrong, even with prompting, that I lost an afternoon. Honestly the only useful thing I’ve found for it is getting it to find potential syntax errors in Terraform code that the plan might miss. It doesn’t even complement my programming skills like a traditional search engine can; instead it assumes a solution that is usually wrong, and you are left to try to build your house on the boilerplate sand it spits out at you.
It’s a general problem with ChatGPT (free): the more obscure the topic, the more useless the answers will be. It works pretty well for Wikipedia-style general knowledge, but everything that goes even a little deeper is a mess. This is true even for things that shouldn’t be that obscure, e.g. pop-culture things like movies. It can give you a summary of Star Wars, but anything even a little outside the mainstream it makes up on the spot.
How much better is ChatGPT-Pro when it comes to this? Can it answer /r/tipofmytongue/ style question?
I’ve found the free one can sometimes answer tip-of-my-tongue questions, but yeah, anything even remotely obscure it will just lie about and say doesn’t exist, especially if you stray a little too close to the puritanical guard rails. One time I was going down a rabbit hole researching human sex organ variations, and it flat out told me the people in South America who grow a penis at 12 don’t exist, until I found the name guevedoces on my own, and wouldn’t you know it, then it knew what I was talking about.
Have you used copilot? I find it to be fantastically useful.
I also have tried to use it to help with programming problems, and it is confidently incorrect a high percentage (50%) of the time. It will fabricate package names, functions, and more. When you ask it to correct itself, it will give another confidently incorrect answer. Do this a few more times and you could end up with it suggesting the first incorrect answer it gave you and then you realize it is literally leading you in circles.
It’s definitely a nice option to check something quickly, and it has given me some good information, but you really can’t blindly trust its output.
At least with programming, you can validate fairly quickly that it is giving bad information. With other real-life applications, using it for cooking/baking, or trip planning, the consequences of bad information could be quite a bit worse.
I have a doctorate in computer engineering, and yeah it’s overhyped to the moon.
I’m oversimplifying it and someone will ackchyually me, but once you understand the core mechanics the magic is somewhat diminished. It’s linear algebra and matrices all the way down.
We got really good at parallelizing matrix operations and storing large matrices and the end result is essentially “AI”.
Big emphasis on the ‘A’
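The “matrices all the way down” point is pretty literal: one layer of a neural net is a matrix multiply plus a nonlinearity, repeated billions of times in parallel. A toy sketch with made-up weights:

```python
def matmul(A, B):
    """Plain matrix multiply; the GPU's job is doing millions of these at once."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(M):
    """Elementwise nonlinearity; without it, stacked layers collapse into one matrix."""
    return [[max(0.0, x) for x in row] for row in M]

# One "layer": a 1x3 input times a 3x2 weight matrix, then the nonlinearity.
x = [[1.0, -2.0, 0.5]]            # made-up input vector
W = [[0.2, 0.5],
     [0.4, 0.3],
     [-0.5, 0.8]]                 # made-up weight matrix
h = relu(matmul(x, W))
```

A model like GPT is, at this level of abstraction, a very deep stack of exactly this kind of operation; what got good recently is the scale and parallelism, not the math.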
That’s because it is overrated, and the people in the tech industry are actually qualified to make that determination. It’s a glorified assistant, nothing more. We’ve had these for years; they’re just getting a little bit better. It’s not gonna replace a network stack admin or a programmer anytime soon.
I work in AI, and I think AI is overrated.
It is currently overhyped and so much of it just seems to be copying the same 3 generative AI tools into as many places as possible. This won’t work out because it is expensive to run the AI models. I can’t believe nobody talks about this cost.
Where AI shines is when something new is done with it, or there is a significant improvement in some way to an existing model (more powerful or runs on lower end chips, for example).
Reality: most tech workers view it as fairly rated or slightly overrated according to the real data: https://www.techspot.com/images2/news/bigimage/2023/11/2023-11-20-image-3.png
Which is fair. AI at work is great, but it only does fairly simple things. Nothing I can’t do myself, but it saves my sanity and time.
It’s all I want from it, and it delivers.
Helps me write hacky scripts to solve one off problems. Honestly, it saves me a few work days.
But it’s far from replacing anybody.
Slightly overrated is where I would put it, absolutely. It’s overhyped, but god if the recent advancements aren’t impressive.
Of course, because the hype didn’t come from tech people, but from content writers, designers, PR people, etc., who all thought they didn’t need tech people anymore. The moment ChatGPT started being popular, I started getting debugging requests from a few designers. They went there and asked it to write a plugin or a script they needed. The only problem was it didn’t really work like it should. Debugging that code was a nightmare.
I’ve seen a few clever uses. A couple of our clients made a “chat bot” whose reference material was their poorly written documentation. So you’d ask the bot something technical related to that documentation and it would decipher the mess. I still claim making better documentation was the smarter move, but what do I know.
I work in tech. AI is overrated.