Looking at Python packages, or any developer-facing form of software, is not a good indicator of AI-based production. The key benefit of AI development is that our focus moves up a few layers of abstraction, allowing us to concentrate on real-world solutions. Instead of measuring GitHub, you need to measure feature releases, internal tools created, and single-user applications built for niche use cases.
Measuring Python packages as an indicator of AI-based production is like measuring saw production to gauge the effectiveness of the steam engine. You need to look at the houses and communities being built, not the tools.
I love shell tools, and by no means disparage the use of zoxide, z, etc. But I find I get 90% of the usefulness of these tools using the native cd command and adding my most used directories to CDPATH.
It's also consistent and works without needing to “train” it first.
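For anyone who hasn't tried it, here's a minimal sketch of the CDPATH approach in a ~/.bashrc; the directory names are just placeholders, swap in whichever parent directories hold your most-used projects:

    # ~/.bashrc -- cd resolves bare directory names against these parents,
    # in order; the leading "." keeps ordinary relative cd working as before.
    export CDPATH=".:$HOME/projects:$HOME/work"

    # Now "cd my-app" from anywhere lands in ~/projects/my-app (if it exists),
    # and bash prints the resolved path so you can see where you ended up.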
I agree, but I get 100% of the usefulness of these tools by installing them with one command and using them. Why settle for 90% when 100% takes a second?
Fair point. I don't disagree. I personally just like to stay as close to default GNU/Linux tooling as is practical. It's a matter of personal taste.
For me, this makes it so my expectations and muscle memory transfer cleanly between my workstation and other servers, devices, etc. I find the default tools are much more powerful than is often understood, and you can replicate most third-party functionality fairly easily. That's not always the case, mind you, and I happily use those tools.
The next version will come with plasma micro-jet ejection from the cutting edge, convenient for civilian use. I'm also wondering about an array of shaped micro-charges on the edge: it should be able to cut through several millimeters of steel in one move, etc., so a soldier could cut through an opponent's ballistic vest or into a lightly armored IFV.
You can keep it sharp by having the shaped-charge array sit a bit deeper and sideways, with a new edge breaking out as a side effect of the shaped charges exploding; rinse and repeat, about five times for a regularly sized sword.
An even more interesting alternative is some quick blade-switching machinery to swap on the fly between the actual blade edge and the shaped-charge array, plus some machinery for feeding shaped charges into the array (and stretched shaped charges instead of the rounded ones).
I think AI is simply exposing problems with academia that have always been there. In my personal experience with both high school and a completed bachelor's degree, 20% of the process is actual learning while 80% is proving what one has learned for the sake of grading and measuring.
As soon as one graduates and enters the real world, the ability to learn is paramount, but the ability to grade said learning is never used again. We need to re-think the system from the ground up so that a student can leverage all available tools, AI included, and still develop a core ability to learn.
What's more, the current focus on grading has been shown to stunt the love of learning, because we're not stupid and we know when we're doing something that does not gain us anything beyond a grade.
If academia responds to this change properly, we could eventually see a system that actually serves our students better than what we currently have.
So are teachers supposed to stop grading students' work?
Some people never leave academia, and that is not a bad thing. What kind of research would get done if there were no academia around for it?
If people barely learn how to read, write, or think critically, how are they expected to handle predatory companies?
How can we expect students to learn anything if they are using tools that cannot be trusted to tell the truth?
There is not one educational system. So if the US changes its system according to your idea, what would happen to US students if other countries do not follow suit? Will they fare better or worse?
Why should they study if they get the idea that AI can help them with everything?
Strong agree. This reminds me of one of my pet theories: that research and education are fundamentally different skills. A good researcher should be flexible and open-minded, almost to a fault, but a good educator needs to be committed to certain beliefs in order to teach them. More important, an educator should instill good habits (even if those habits involve asking good questions) and set a good example, a requirement entirely lacking from research.
So why do all of our universities only employ teachers who have been trained as researchers?
I think much of the 80% grinding that you describe is just the publish-or-perish mindset of graduate school, which the teachers pick up along the way (I'm not faulting them so much as the process). It's more about appearing to know, rather than knowing. This may be what you have to do to survive in a competitive research environment, but one is left wondering what any of that has to do with educating our children, especially the majority who will never become researchers.
There's no way I could say what percentage of my schooling was "actual learning" versus assessment or "busywork." The modern obsession with standardized testing and constant measurement has certainly made things worse.
I disagree that "ability to grade said learning is never used again." Stack-ranking is very much a thing; the grading just gets fuzzier.
What the article points to is that, when a teacher gives an assignment meant to encourage students to think and learn, e.g. picking themes out of a novel, most students completely miss the point, instead getting an LLM to generate words in the shape of a student essay. They're crippling their future selves to save some time.
What would I do if I were a teacher? My impulse would be to make all assignments ungraded or pass/fail. Students could choose to learn or cheat as much as they liked. But then their grades would depend upon a midterm and a final, either oral or written in-person. For the ones who had been cheating their way along, the midterm would hopefully be a wake-up call, and they could redeem themselves on the final.
Every time I see news about cities discovered in south america I look to see if the dates line up with the Nephite/Lamanite civilizations of the Book of Mormon (roughly 600 BC to 400 AD). This is the closest yet that I've seen. I'm eager to see what they discover in the coming years about their culture and government. Will we find traces of their Judge-based government? Signs that they worshiped Christ and practiced Jewish rituals? It will be interesting to watch this investigation unfold.
To me it seems notable that the conversation turns into comparisons with early Western things... because of course, it is very unlikely that people on this forum have much of a reference point of any kind for these very recent scientific findings. "Pre-Columbian art" has been a staple of artistic and intellectual circles for a long time... it is easier to look at one stone figure and wonder.
They actually just released a new, robust getting started guide that is an EXCELLENT place to start learning the framework: https://guides.rubyonrails.org/getting_started.html. I highly recommend starting there.
The new getting started guide is great. Chris Oliver is a great teacher. Afterwards, if you want a 102-style tutorial, I'm a big fan of https://www.hotrails.dev/. It goes step by step and explains how Hotwire works and how to use it.
Rails has so much to offer these days that going vanilla really is a viable (and enjoyable) option. Here at Swivel we use the ENTIRE Rails stack, from Hotwire to Kamal, and it is so simple and easy to maintain and deploy. I do think there are use cases where adding a React frontend is good, though. But you don't have to; Rails has everything you need, so you can focus on creating value for customers instead of plumbing.