> People hold up China as an example, but China was not displacing any local industry, including its own. It's incredibly easy to do that because it's greenfield. Fast-forward 20 to 30 years, when new thinking might impact BYD's or CATL's bottom line, and they may not look so forward-thinking.
I would add that despite joint ventures, China's domestic internal combustion engine industry never really caught up. In fact, their best engines were made by wholly domestic companies, but those were not nearly as good as the ones made by Western and Japanese companies.
As Warren Buffett noted over a decade ago, BEVs are an opportunity for China to simply skip over all of that and leapfrog everyone else. So it's even better than greenfield: it's greenfield for them while letting them completely disrupt existing foreign competitors.
> A sow will absolutely lay down on her piglets and suffocate them.
This makes me really curious, because that behavior seems very maladaptive for a species. It leads me to wonder if something else, i.e. the environment or domestication, is causing this behavior, rather than pigs being really, really prone to wiping out their own species. Does anyone know why they do this in a farm environment?
Pigs breed like rabbits, so their evolutionary path hasn't been to ensure individuals survive at the highest possible rate; their path was to have a dozen babies at a time so that even if 80% of them get killed or eaten, the population still grows and thrives. For a farmer, losing 20% of their pigs because the mother sat on the babies and suffocated them is a massive loss of money; for a wild pig it doesn't matter as much, because 3x more will get eaten by predators and another dozen is already on the way within a week or two of giving birth to the first litter.
Some of the loss likely is due to keeping them penned up. However, there are also losses from not penning them up and letting piglets run among a herd of many adult pigs, some of which will attempt to kill piglets, especially females who have not had piglets yet. Pigs can be absolutely vicious and will readily eat other living animals if they think they can get away with it, including other pigs, and some mother pigs have been known to cannibalize their young even under ideal conditions. Pig farmers have themselves been killed by pigs after passing out or getting knocked out in pig pens, with the pigs seeing them as a free meal not to be wasted.
This is exactly how effective censorship works. For example, what most people don't understand about Chinese censorship is that the foundation of the system is that everything is eventually attributable to someone. So they start by targeting anonymity. Then, when something they don't like is published and gains traction, the originating party and the major distributors are punished -- sometimes very publicly. The chilling effect is that people learn to self-censor. Oh, and they keep the rules really vague, so you always err on the side of caution.
CBS self censoring is basically the same thing.
The Chinese government can then say "What censorship?" or "It's rare" and now the FCC can do the same.
Playing whack-a-mole is not a good strategy for censorship. The chilling effect of self-censorship is the winning strategy.
Hesai has driven the cost into the $200 to $400 range now. That said, I don't know what the ones needed for driving cost. Either way, we've gone from thousands or tens of thousands of dollars into the hundreds.
Looking at prices, I think you are wrong and automotive Lidar is still in the 4 to 5 figure range. HESAI might ship Lidar units that cheap, but automotive grade still seems quite expensive: https://www.cratustech.com/shop/lidar/
Those are single-unit prices. The AT128, for instance, which is listed at $6,250 there and is widely used by several Chinese car companies, was around $900 per unit in high volume, and over time they lowered that to around $400.
The next generation of that, the ATX, is the one they have said would be half that cost. According to regulatory filings in China, BYD will be using it on entry-level $10k cars.
Hesai got the price down for their new generation through several optimizations. They are using their own designs for lasers, receivers, and driver chips, which reduced component counts and material costs. They have also stepped up production to 1.5 million units a year, giving them mass-production efficiencies.
That model only has a 120 degree field of view, so you'd need 3-4 of them per car (plus others for blind spots; they sell units for that too). That puts the total system cost in the low thousands, not the $200 to $400 stated by GP. I'm not saying it hasn't gotten cheaper or won't keep getting cheaper; it just doesn't seem that cheap yet.
From the article, "its productivity software is used by hundreds of millions of corporate users, a captive audience to whom it can easily promote new AI products."
Their end users are ultimately what they sell; they are a captive audience. This is what monopolies/platforms do. It's never been part of MSFT's DNA to care that much about the end-user experience. Who they really cater to are the IT decision makers, etc. Those people can then show some numbers about "AI adoption" and "productivity gains" on the PowerPoint slides presented to their bosses. MSFT's value is delivering that to them.
Weirdly enough, we asked Microsoft to help us build these reports and give us insights into these numbers. The ones in our country were utterly incapable and just sent screenshots of Power BI reports from the US team.
So yeah, it really is completely broken internally. Monopoly abuse to the fullest: we weren't even allowed by our CTO to do an RFP with potential Copilot competitors, and the license cost for 5,000 users is insane.
You can fine-tune the sensitivity via the PII_ENTROPY_THRESHOLD environment variable.
If you consider UUIDs to be sensitive in your context (or if you are getting false positives), you can adjust the threshold. For example, standard UUIDs have lower entropy density than API keys, so slightly tuning the value (e.g. from 3.8 to 3.2, or vice versa) lets you draw the line where you need it.
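For intuition, here is a rough sketch of why a UUID scores lower than a random API key on per-character Shannon entropy. The helper below is the standard entropy calculation, not the scanner's actual code, and the key string is made up:

```python
# Compare per-character Shannon entropy of a UUID vs. a random-looking key.
# This is an illustrative calculation, not the tool's implementation.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character, based on observed character frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A UUID draws from a small alphabet (16 hex digits plus '-'), with repeats.
uuid_value = "550e8400-e29b-41d4-a716-446655440000"
# A made-up API key drawn from a much wider, mixed-case alphabet.
api_key = "sk_live_9xQ2mV7pLc4KdT8RbN1yWf6ZhJ3A"

print(f"UUID entropy:    {shannon_entropy(uuid_value):.2f} bits/char")  # below 3.8
print(f"API key entropy: {shannon_entropy(api_key):.2f} bits/char")     # above 3.8
```

With a threshold of 3.8 the UUID passes through unredacted while the key is caught; lowering it toward 3.2 starts catching UUIDs too.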
Is there a way to tell it to just recognize UUIDs and redact those, without adjusting the threshold? In our case, UUIDs are just an exception; I think everything else you're doing is correct for our situation.
Currently, no — the scanner focuses on entropy and specific Key Names, not value patterns (Regex).
However, if your UUIDs live in consistent fields (e.g., request_id, trace_token, uuid), you can add those field names to the Sensitive Keys list. This forces redaction for those specific fields regardless of their entropy score, while keeping the global threshold high for everything else.
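A minimal sketch of that kind of key-name override, with hypothetical names (`SENSITIVE_KEYS`, `redact`) that are not the scanner's actual API:

```python
# Key-name-based redaction: values under listed field names are always
# masked, regardless of their entropy score. Illustrative sketch only.
SENSITIVE_KEYS = {"request_id", "trace_token", "uuid"}

def redact(record: dict) -> dict:
    """Return a copy with sensitive fields masked, recursing into nested dicts."""
    out = {}
    for key, value in record.items():
        if isinstance(value, dict):
            out[key] = redact(value)
        elif key in SENSITIVE_KEYS:
            out[key] = "[REDACTED]"
        else:
            out[key] = value
    return out

event = {"user": "alice", "request_id": "550e8400-e29b-41d4-a716-446655440000"}
print(redact(event))  # request_id is masked even though a UUID's entropy is low
```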
That said, "Redact by Value Regex" (to catch UUIDs anywhere) is a great idea. I'll add it to the backlog.
Does Google have the hardware design expertise needed to compete? If they don't already possess it, then it's quite a dilemma, because they would need to either buy a top-notch handset maker and hope it can be competitive with the other Android makers, or build that expertise up themselves. And all this has to happen while competing with other Android makers, who will be very wary of Google. I'm also not sure Google needs specific Android phones to be the best or most popular to win the things they care about. Phones are just platforms for them; Android ensures no one has a chokepoint on that.
I have had recent iPhones, Pixels, and a Samsung phone, all high end. I'm a bit biased, but I do honestly think Pixels are equal or better in build quality compared to Samsung's. The software is better for me too, but I accept that's largely personal preference.
I think the iPhones are out in front a little, but in a way that I'm not sure really matters. I've loved the iPhone hardware I've owned, but the difference in build quality isn't noticeable unless you look carefully, and isn't noticeable at all in a case. The only way I'd say it's noticeable is if you're a hardware nerd who knows how the things are manufactured, or if you get a repair bill. What Apple have done with iPhone hardware is a huge achievement, but, speaking as someone who likes owning nice things, I'd happily take a Pixel 10.
Google bought HTC's smartphone design division 8 years ago to the day, and if I recall correctly that exacerbated a lot of the tension in the Android OEM space that the original Google Pixel rollout caused in the first place.
Very, very different tools, though they cover similar areas.
Temporal - if you have strict workflow requirements, want _guarantees_ that things complete, and are willing to take on extra complexity to achieve that. If you're a bank or something, probably a great choice.
Oban - a DB-backed worker queue that processes tasks off-thread. It does not give you the guarantees Temporal can, because it has not abstracted every push/pull into a first-class citizen. While it offers some similar workflow features, to get multiple 9s of reliability you will be hardening it yourself (based on my experience with Celery + Sidekiq).
Based on my heavy experience with both, I'd be happy to have both available to me in a system I'm working on. At my current job we are forced to use Temporal for all background processing, which for small tasks is just a lot of boilerplate.
I’ll say that I think this varies by language/SDK. At least with the Temporal TypeScript SDK, a simple single-step idempotent background task is however many lines of code to do the actual work in an activity, and then the only boilerplate is about 3 lines for a simple workflow function that calls the activity.
We migrated from Celery to Prefect a couple of years back and have been very happy. But ours is a small op which handles tasks in 1000s and not millions. It’s been night and day in terms of visibility and tracking. I would definitely recommend it.
It’s a heavy weight that covers a lot of use cases. But we just run simple ProcessWorkers for our regular needs and ECS worker for heavier ML tasks.
I'm just coming back to web/API development in Python after 7-8 years working on distributed systems in Go. I just built a Django+Celery MVP based on what I knew from 2017, but I see a lot of "hate" towards Celery online these days. What issues have you run into with Celery? Has it gotten less reliable? Harder to work with?
Celery + RabbitMQ is hard to beat in the Python ecosystem for scaling. But the vast, vast majority of projects don't need anywhere near that kind of scale and instead just want basic features out of the box: unique tasks, rate limiting, asyncio support, future scheduling that doesn't cause massive problems (scheduled tasks are held in-memory on workers), etc. These things are incredibly annoying to implement on top of Celery.
We don't hate Celery at all. It's just a bit harder to get it to do certain things and requires a bit more coding and understanding of celery than what we want to invest time and effort in.
Again, no hate towards Celery. It's not bad. We just want to see if there are better options out there.
But if you are too small for Celery, it seems a hard sell to buy a premium message queue?
My top problem with my Celery setup has always been visibility. AI and I spent an afternoon setting up a Prometheus/Grafana server and wiring Celery into it. It has been a game changer: when things go crazy in prod, I can usually narrow it down to a specific task for a specific user. That has taken my troubleshooting from days to minutes. The actual queue-and-execute part has always been easy and has worked well.
Most Django projects just need a basic way to execute timed and background tasks. Celery requires separate containers or nodes, which complicates things unnecessarily. Django 6.0 luckily has a tasks framework -- which has been backported to earlier Django versions as well -- that can use the database. https://docs.djangoproject.com/en/6.0/topics/tasks/
Django 6's tasks framework is nice, but so far it is only an API; it does not include an actual worker implementation. There is a django-tasks package that provides a basic implementation, but it is not prod-ready. I tried it and found it very unreliable. Hopefully the community will come out with backends that plug in Celery, Oban, RQ, etc.
Could you say a bit more about "it is very unreliable"? I'm considering using django-tasks with an rq backend [1] and would like to hear about your experiences. Did you find it dropping tasks, difficult to operate, etc.?
I like Celery, but I started trying other things when I had projects doing work from languages in addition to Python. Also, I prefer that the code work without having to think about queues as much as possible. In my case that was Argo Workflows (not to be confused with Argo CD).