"Then, within twenty minutes, we started ignoring the cows. … Cows, after you’ve seen them for a while, are boring"
Skill issue. I've been looking at cows for 40 years and am still enchanted by them. Maybe it helps that I think of cows as animals instead of storybook illustrations; you'd get lynched if you claimed you got bored of your pet cat after 20 minutes.
Ghibli images are not "cows", they're /an artist's style/, from a particular studio that has expressly asked that you *not copy their work*, because it cheapens what humans do.
Maybe you already don't find cows beautiful and so didn't appreciate the metaphor. Here's another take: Driving the road to Hana on Maui, I think you drive by like 50 waterfalls. We were in awe for the first dozen, but by the 50th, it was just another waterfall. Or seeing nonstop bald eagles in Alaska, by the time you leave, they're like pigeons.
The point being made is exactly that something beautiful has been cheapened.
Why should this argument work, but not the same argument against using a combine harvester, which also cheapens the work of a farmer?
The article is defining cows as something we see too much of. Copying Ghibli's work turns the images into cows, regardless of how the artist feels about it. Obviously it would be ideal if that wasn't happening.
Do people really try to one-shot their AI tasks? I have just started using AI to code, and I found the process very similar to regular coding… you give a detailed task, then you iterate by finding specific issues and giving the AI detailed instructions on how to fix the issues.
It works great, but I can’t imagine skipping the refinement process.
> Do people really try to one-shot their AI tasks?
Yes. I almost always end with "Do not generate any code unless it can help in our discussions as this is the design stage". I would say 95% of my code for https://github.com/gitsense/chat in the last 6 months was AI generated, and I would say 80% of it was one-shot.
It is important to note that I can easily get into 30+ messages of back and forth before any code is generated. For complex tasks, I will literally spend an hour or two (that can span days) chatting and thinking about a problem with the LLM, and I do expect the LLM to one-shot them.
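To give a rough sense of the shape of those sessions (an illustrative outline only, not a real transcript):

```
messages 1..N    design discussion: the problem, constraints, trade-offs,
                 edge cases; each message ends with the "do not generate
                 any code ... this is the design stage" instruction
message N+1      ask for the implementation of what we settled on
result           one large generated change, which I then review heavily
```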
Do you feel as if your ability to code is atrophying?
Not even remotely, since the 5% that I need to write is usually quite complex. I do think my writing proficiency will decrease, though. However, my debugging and problem-solving skills should increase.
Having said all of that, I do believe AI will have a very negative effect on developers where the challenge is skill and not time. AI is implementing things that I can do if given enough time. I am literally implementing things in months that would have taken me a year or more.
My AI search is nontrivial, but it only took two months to write. I should also note the 5% that I needed to implement was the difference between throwaway code and a usable search engine.
>Not even remotely since the 5% that I need to write is usually quite complex.
Not sure I believe this. If you suddenly automate away 95% of any task, how could it be the case you retain 100% of your prior abilities?
>However my debugging and problem solving skills should increase
By "my", I assume you mean "my LLM"?
>I do think my writing proficiency will decrease though.
This alone is cause for concern. The ability for a human being to communicate without assistance is extremely important in an age where AI is outputting a significant fraction of all new content.
> Not sure I believe this. If you suddenly automate away 95% of any task, how could it be the case you retain 100% of your prior abilities?
I need to review like crazy now, so it is not like I am handing off my understanding of the problem. If anything, I learn new things from time to time, as the LLM will generate code in ways that I haven't thought of before.
The AI genie is out of the bottle now, and I do believe that in a year or two, companies are going to start asking for conversations along with the LLM-generated code, which is how I guess you can determine whether people are losing their skill. When my code is fully published, I will include conversations for every feature/bug fix that is introduced.
> The ability for a human being to communicate without assistance is extremely important
I agree with this, but once again, it isn't like I don't have to review everything. When LLMs get much better, I think my writing skills may decline, but as it currently stands, I do find myself having to revise what the LLM writes to make it sound more natural.
Everything is speculation at this point, but I am sure I will lose some skills. I also think I will gain new ones by being exposed to things I haven't thought of before.
I wrote my chat app because I needed a more comfortable way to read and write *long* messages. For the foreseeable future, I don't see my writing proficiency decreasing in any significant manner. I can see myself being slower to write in the future, though, as I find myself being very comfortable speaking to the LLM in a manner that I would not with a human. LLMs are extremely good at inferring context, so I do a lot of lazy typing now to speed things up, which may turn into a bad habit.
Every tool I've tinkered with that hints at one-shotting (or one-shot and then refine) ends up with a messy app that might be 60-70% of what you're looking for, but since the foundation is not solid, you're never going to get the extra 30-40% of your initial prompt, let alone the multiples of work needed to bolt on future functionality.
Compare that to the approach you're using (which is what I'm also doing), and you're able to have AI stay much closer to what you're looking for, be less prone to damaging hallucinations, and also guide it to a foundation that's stable. The downside is that it's a lot more work. You might multiply your productivity by some single digit.
To me, that second approach is much more reasonable than trying to 100x your productivity and actually ending up with less done, because you get stuck in a rabbit hole you don't know you're in and you'll never refine your way out of it.
I got stuck in that rabbit hole you mention. I ended up ditching AI and just picked up a no/low-code web app builder, because I don't handle large project contexts in my own head well enough to chunk the design into tasks that AI can handle. But the builder I use can separate the backend from the front end, which allows a custom front-end template's source code to be consumed by an AI agent if you want. I'm hoping I can manage this context better, but I still have to design and deploy a module to consume user-submitted photos and process them with an AI model for instant quote generation.
This is Allan Schnaiberg's concept of the treadmill of production, where actors are perpetually driven to accumulate capital and expand the market in an effort to maintain relative economic and social position.
Interesting that radical abundance may create radical competition to utilize more abundant materials in an effort to maintain relative economic and social position.
If we give runners motorcycles, they reach finish lines faster. But motorsport is still competitive and takes effort; everyone else has a bike, too. And since the bike parameters are tightly controlled (basically everyone is on the same bike), the competition is intense.
The cost of losing the race is losing your home and starving. Very intense.
The analogy holds because it's way more expensive, stressful, and the stakes are higher. Also, it's harder to get into without already having an advantage (like rich parents).
My prediction is that the next differentiator will be response time.
First we got transparent UIs, now everyone has them. Then we got custom icons, then Font Awesome commoditized them. Then flat UI until everyone copied it. Then those weird hand-painted Lottie illustrations, and now thanks to Gen-AI everyone has them. (Then Apple launched their 2nd gen transparent UI.)
But the one thing that neither caffeinated undergrads nor LLMs can pull off is making software efficient. That's why software that responds quickly to user input will feel magical and stand out in a sea of slow and bloated AI slop.
> New technologies give us greater leverage to do more tasks better. But because this leverage is usually introduced into competitive environments, the result is that we end up having to work just as hard as before (if not harder) to remain competitive and keep up with the joneses.
More flour more water. More water more flour.
To win big financially you have to be able to use AI better than others. Even if you use it merely as well as the next person, your productivity has increased, reducing costs, which is a good thing. The bad news for some is that they are not enjoying the parts of the work left over from automation.
I don't see how that can be. There is no exponential return on "investing" in using AI real good.
Investing in your understanding and skill, on the other hand, has nearly limitless returns.
I did not speak of "exponential" returns, but it is now feasible for one person to compete with a team, or a small team with a big one, due to co-ordination costs and the difficulty of assembling the right people.
What?? That isn't a complete idea. It has always been possible for a small team to compete with a big one.
As someone on a very small team competing with a very big one I don't have time for anything that can't bring exponential returns. I have no time for LLMs.
What even is an exponential return? You need to be more precise with your terms.
An investment in a person has superlinear returns: with time the human student becomes the teacher. Each person you teach might teach two more people, with the overall trend following exponential growth deriving from the value of the initial investment in a single person.
LLMs promise to speed you up right now in direct proportion to the amount you pay for tokens while sacrificing your own growth potential. You'd have to be a cynic to do it -- you'd have to believe that your own ideas aren't even worth investing in over the long term.
Returns for who, the company or the student? Juniors are often a net negative for companies. Some stay that way because they just won't learn. You would get further by hiring seniors and learning from them.
Again, that's the cynical view, which assumes management but no leadership with any ability or conviction. Sadly, that's commonly the reality now.
> Generative AI gives us incredible first drafts to work with, but few people want to put in the additional effort it requires to make work that people love
and
> So make your stuff stand out. It doesn't have to be "better." It just has to be different.
equals... craft?
Isn't that what has always mattered a great deal?
I wouldn't say everything that gets hugely popular has a ton of craft behind it; to me, craft is about skill, but a badly drawn webcomic (random example) can still be very popular if it has some other point of difference.
Out of curiosity, isn't this very similar to the Jevons paradox? Or is JP talking about supply/demand vs this being about competitiveness/skill?
With my current project (a game project), I full-vibed as hard as I could to test out the concept, as well as get some of the data files in place and write a tool for managing the data. This went great, and I have made technology choices for AI-coding and have gained enough skill with AI-coding that I can get prettttty far this way. But it does produce a ball-of-mud pattern and a lot of cruft that will cause it to hit a brick wall.
Then I copied the tool and data to a new directory and fully started over, with a more concrete description of the product I wanted in place and a better view of what components I would want, and began with a plan to implement one small component at a time, each with its own test screen, reviewing every change and not allowing any slop through (including any features that look fine from a code standpoint but are not needed for the product).
So far I'm quite happy with this.
Where does the product description sit in your project so the ai can reference it? Is it like a summary form that describes what the project basically should do or be used for, asking for a friend
It's right in CLAUDE.md
For take #1 I said what tech to use and gave a high-level description of the game and its features. I guess I failed to mention this part, but when I threw take #1 away, I first used Claude + hand editing to update it to have a detailed description of each screen and feature in the game. So take #2 had a much more detailed description of exactly what was going to be built, but still, right in CLAUDE.md.
I did also create a DEVELOPMENT-PLAN.md first with Claude and have been having it update it with what's been done before every commit. I don't yet have a good idea of how impactful that part has been.
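If it helps to picture it, here's a stripped-down sketch of the kind of structure I mean (not my actual file; the headings are just my own convention, nothing Claude requires):

```
# CLAUDE.md

## Product
One-paragraph description of the game and what it should feel like to play.

## Screens & features
- Title screen: ...
- Main gameplay screen: what the player sees and can do
- Data editor: the tool that manages the data files

## Tech
Engine/language choices, how to build, how to run the tests.

## Working rules
- Implement one small component at a time, each with its own test screen.
- No features that aren't needed for the product, even if the code looks fine.
- Update DEVELOPMENT-PLAN.md with what's been done before every commit.
```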
This seems like an insubstantial article; ironically, it might have been written by AI. Here's the entire summary:
AI makes slop
Therefore, spend more time to make the slop "better" or "different"
[No, they do not define what counts as "better" or "different"]
It's crazy to me that people will reference Keynes' prediction of leisure without acknowledging that we chose not to do that. The dystopian way in which work has become more competitive, intensive, and ill-compensated even as economies have supposedly continued to become more productive is the result of policy choices, not some inevitable fact of the universe.
People dislike the word slop because it sounds harsh.
But what’s unique today becomes slop tomorrow, AI or not.
Art has meaning. Old buildings feel special because they're rare. If there were a thousand Golden Gate Bridges, the first wouldn't stand out as much.
Online, reproduction is trivial. With AI, reproducing items in the physical world will get cheaper.
> Old buildings feel special because they’re rare.
No. When you have a city full of old houses all from the same era, maybe even by the same architect, the new building still looks ugly. The old house looks beautiful, even when you have hundreds of copies next to it.
This article says that the stairs have been turned into an escalator. But I think it’s an escalator to slop.
Therefore, it doesn’t affect my work at all. The only thing that affects my prospects is the hype about AI.
Be a purple cow, the guy says. Seems to me that not using AI makes me a purple cow.
> Therefore, it doesn’t affect my work at all.
But that isn't what the author is talking about. The issue is, your good code can be equal to slop that works. What the author says needs to happen is that you need to find a better way to stand out. I suspect that for many businesses where software superiority is not a core requirement, slop that works will be treated the same as non-slop code.
> slop that works
Until that slop that works leads to Therac-26 or PostOfficeScandal2: Electric Boogaloo. Neither of those applications required software superior to their competitors, just working software.
The average quality of software can only trend down so far before real-world problems start manifesting, even outside of businesses with a hard requirement on "software superiority".
Anyone can say that something works. Lots of things look like they work even though they harbor severe and elusive bugs.
It's so bizarre to me seeing these comments as a professional software engineer. Like, you do realize that at least 80% of the code written in large companies like Microsoft, Amazon, etc. was slop long before AI was ever invented, right?
The stuff you get to see in open source, papers, academia - that's a very small, curated 1%. The actual glue code, written by an overworked engineer at 1am, is what holds literally everything together.
Why is it bizarre? I’m a tester with 38 years in the business. I’ve seen pretty much every kind of project and technology.
I was testing at Microsoft on the week that Windows 2000 shipped, showing them that Photoshop can completely freeze Windows (which is bad, and something they needed to know about).
The creed of a tester begins with faith in the existence of trouble. This does not mean we believe anything is perfectible — it means we think it is necessary to be vigilant.
AI commits errors in a way and to a degree that should alarm any reasonable engineer. But more to the point: it tends to alienate engineers from their work so that they are less able to behave responsibly. I think testing is more important than ever, because AI is a Gatling gun of risk.
You are focusing on code. That is the wrong focus. Creating code was never the job. The job was being trustworthy about what I deliver and how.
AI is not worthy of trust, and the sort of reasonable people I want to deal with won’t trust it and don’t. They deal with me because I am not a simulation of someone who cares— I am the real thing. I am a purple cow in terms of personal credibility and responsibility.
To the degree that the application of AI is useful to me without putting my credibility at risk, I will use it. It does have its uses.
(BTW, although I write code as part of my work, I stopped being a full-time coder in my teens. I am tester, testing consultant, expert witness, and trainer, now.)
I've been thinking something similar about any company that has AI do all of its software dev.
Where's your moat? If you can create the software with prompts so can your competitors.
Attackers knowing which model(s) you use could also do similar prompts and check the output code, to speculate what kind of exploits your software might have.
A lawyer knowing what model his opposition uses could speculate on their likely strategies.
The set of commercially successful software that could not be reimplemented by a determined team of caffeinated undergrads was already very small before LLM assistance.
Turns out being able to write the software is not the only, or even the most important factor in success.
I’d suggest reading about competitive moats and where they come from. The ability to replicate another’s software does not destroy their moat.
> This is the leverage paradox. New technologies give us greater leverage to do more tasks better. But because this leverage is usually introduced into competitive environments, the result is that we end up having to work just as hard as before (if not harder) to remain competitive and keep up with the joneses.
Off-topic, but in biology circles I've heard this type of situation (where "it takes all the running you can do, to keep in the same place" because your competitors are constantly improving as well) called a "Red Queen's race" and really like the picture that analogy paints.
https://en.wikipedia.org/wiki/Red_Queen%27s_race
This circumstance is more commonly known as the Jevons Paradox
https://en.wikipedia.org/wiki/Jevons_paradox
Also known as induced demand, and why adding a lane on the highway doesn’t help for long
https://en.wikipedia.org/wiki/Induced_demand
I feel that I understand the leverage paradox concept, and the induced demand concept, but I don't understand how they are the same concept. Can you explain the connection a little more?
More leverage = more productivity = more supply of goods and services.
The induced demand for more goods and services therefore fills the gap and causes people to work just as hard as before -- similarly to how a highway remains full after adding a lane.
TL;DR relative status is zero sum