> Many people believe that programming is an essential skill. In my opinion, programming is just a tool. Modern no-code and low-code platforms have made this skill less critical for entering the field.
> What matters most is the result - how effectively you can use available tools to solve problems.
> If you are comfortable working in a no-code environment and can use it effectively, then you don't need to be a programming expert. I believe that with the advancements in Large Language Models, this skill will become much easier to acquire for those just starting out.
I'm going to push back against this, mainly because of version control.
I've seen data teams that work without version control - everything done with ad-hoc SQL scripts and Jupyter notebooks, with queries shared via Slack or pasted into Google Docs - and it's a huge mess.
The great weakness of no-code platforms in my opinion is that they rarely integrate well with version control.
I think having a culture of version control in a data team leads to a culture of transparency and repeatability, which makes the overall quality of the outcomes significantly higher.
(I just noticed this appears to be a guest post on the blog for a company that sells a proprietary AI-assistant Python notebook app: https://mljar.com/ )
Thank you for the comment. Version control is crucial for software development. Data science has a lot of experimentation involved. In my opinion, not everything should be in a version control system. My typical workflow is to experiment heavily with Python notebooks, without version control. Once I have the analysis procedure ready, I apply version control to the Python notebooks or rewrite the code into Python scripts or modules. I think it all depends on what problem you are working on, and the level of experimentation.
I'm the founder of the MLJAR company, and we are working on an AI-assistant Python notebook. It is a desktop app that aims to help people get started in data science.
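When I do move a notebook into version control, I first strip the outputs so the diffs stay readable. Here is a minimal sketch using the nbformat library; the file name is just a placeholder, and dedicated tools such as nbstripout or jupytext handle this more robustly:

```python
# Minimal sketch: strip outputs and execution counts before committing,
# so git diffs show only code and markdown changes.
# Assumes nbformat is installed; "analysis.ipynb" is a placeholder name.
import nbformat

nb = nbformat.read("analysis.ipynb", as_version=4)

for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []            # drop rendered results
        cell.execution_count = None  # drop run-order markers

nbformat.write(nb, "analysis.ipynb")
```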
For me, the time for version control is the moment you have an analysis that you're ready to share with other people.
The methodology for those analyses needs to be recorded and tracked so others can review what you did and use that to decide whether they can trust your results.
> Many people believe that programming is an essential skill. In my opinion, programming is just a tool. Modern no-code and low-code platforms have made this skill less critical for entering the field.
> What matters most is the result - how effectively you can use available tools to solve problems.
This depends on the organization where a DS works. If a DS is working within an organization with a robust BI/DE/whatever team that does all of the ingestion, normalization, cleaning, and pipelining of data for them, programming is far less important, and concentrating on the results derived from those prepared data is far more important. But if the DS is working in a bubble, forced to do all of the aforementioned work themselves, programming is an absolute requirement.
Taking a near-perfect prepared dataset and deriving modeled insights requires no huge programming lift from a DS in 2025; they may be able to copy-paste LLM- or Stack Overflow-derived pieces of code and spend their time on insight derivation and next steps.
But in many companies there just isn't a robust, flexible set of teams doing the correct prep work that lets the DS do what they do best, and some sort of hybridized data/analytics engineer/DS is required to get to those insights -- albeit with less time left for the modeling, analyses, and outputs.
YMMV.