> "Nvidia isn't alone, as tech giants have taken measures to push employees to incorporate more AI into their day-to-day work. Both Microsoft and Meta plan to evaluate employees based on their AI usage, and Google told engineers to use AI for coding, Business Insider reported. Amazon was in talks to adopt the AI coding assistant Cursor after employees requested it…"
My company also told us they are tracking how much we use AI and how much we use it will be factored into our yearly performance reviews.
It is interesting because plenty of organizations ban the use of AI in many situations. One client I work for blocks Copilot in VSCode when on their VPN.
Ha! When was the last time anyone took the blame for crappy code? This is an industry with zero accountability for quality. Fail fast right? At least when I tell an LLM it's wrong it says, "You're absolutely right" and gets to fixing it rather than an hour lecture about why they're totally correct and justified because of their version of "best practices".
Well the point is the manager gets the praise/promotion/etc for reducing costs and supposedly improving performance, and then they bounce and leave the company, moving on to the next place, before the long term effects can be evaluated.
If AI were actually any good programmers would have to sneak it in the backdoor,
without the knowledge of management and their "approved software",
like we do with almost any tool we find that really works.
I would never install these ai tools on my computer. It's going to immediately scan and upload my source code. Why would I want them to steal my code? Nothing good can come from that.
> "Nvidia isn't alone, as tech giants have taken measures to push employees to incorporate more AI into their day-to-day work. Both Microsoft and Meta plan to evaluate employees based on their AI usage, and Google told engineers to use AI for coding, Business Insider reported. Amazon was in talks to adopt the AI coding assistant Cursor after employees requested it…"
My company also told us they are tracking how much we use AI, and that our usage will be factored into our yearly performance reviews.
It is interesting because plenty of organizations ban the use of AI in many situations. One client I work for blocks Copilot in VSCode when on their VPN.
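For what it's worth, a block like that usually happens at the network layer rather than inside VSCode itself. Here's a minimal probe sketch of how you'd see it from the client side, assuming the commonly cited Copilot endpoints (hostnames are illustrative and may change):

```python
import socket

# Probe the endpoints the Copilot extension talks to. If the VPN's
# firewall/DNS blocks them, the connections fail and the extension goes dead.
COPILOT_HOSTS = [
    "api.githubcopilot.com",
    "copilot-proxy.githubusercontent.com",
]

for host in COPILOT_HOSTS:
    try:
        socket.create_connection((host, 443), timeout=3).close()
        print(f"{host}: reachable (Copilot would work)")
    except OSError as exc:
        print(f"{host}: unreachable ({exc}) -- likely blocked")
```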
Anyone with half a brain knows that AI is unreliable. It can and will make mistakes.
Who gets the blame for this?
It's like insisting that managers hire unreliable people because they're cheaper, but the managers know they will pay the price for doing so.
Ha! When was the last time anyone took the blame for crappy code? This is an industry with zero accountability for quality. Fail fast, right? At least when I tell an LLM it's wrong, it says "You're absolutely right" and gets to fixing it, rather than the hour-long lecture a human would give about why they're totally correct and justified because of their version of "best practices".
LLMs are being used for a lot more than code generation.
Well, the point is that the manager gets the praise/promotion/etc. for reducing costs and supposedly improving performance, and then they bounce to the next company before the long-term effects can be evaluated.
> Who gets the blame for this?
The computer, which cannot be held accountable. See how that works?
If AI were actually any good, programmers would have to sneak it in through the back door, without the knowledge of management and their "approved software", like we do with almost any tool we find that really works.
Makes sense an 'AI' chip maker would say that.
I would never install these AI tools on my computer. They're going to immediately scan and upload my source code. Why would I want them to steal my code? Nothing good can come from that.
Hmmm...he almost comes across as desperate. Wonder why...
Why must executives be so fucking obnoxious about shoving AI into every possible orifice?
Because there's a lot of money in it.
>And if AI does not work for a specific task, "use it until it does," he added.