Hello HN!
There’s a lot of focus (and money) on scaling LLMs, but while building Twenty [1] we’ve observed that current models are already capable enough for most use cases. We think what’s missing today is a new kind of software architecture: one that evolves, learns from feedback, and compiles complex patterns to adapt to users. I wrote about this and some of our learnings in the article above — curious to hear your thoughts!
[1] https://github.com/twentyhq/twenty
reminds me of the "memory for agents is a moat" idea, but for software
memory for software