Hacker News | mem0r1's comments

The argumentation is flawed.

1. Just imagine the required charging infrastructure if all vehicles were suddenly battery powered. Charging electric vehicles requires considerable power. Can the power grid installations (power lines, transformer stations, ...) sustain that? What resources, financial and physical (e.g. copper), are required in order to adapt the infrastructure? Just an example: a supercharger station which can charge a few cars at "full speed" simultaneously can draw more than 1 megawatt. Furthermore, superchargers are quite complex (and expensive) technology;

2. Batteries are not a suitable large scale energy storage. However, non-dispatchable, fluctuating energy sources such as solar and wind power require huge amounts of storage in order to sustain the power demand.
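
As a rough sanity check on the ">1 megawatt" figure, here is back-of-envelope arithmetic; the stall count and per-stall power are assumed, illustrative values, not from the comment:

```python
# Back-of-envelope check of the supercharger-station power claim.
# Assumed (hypothetical) figures: 12 stalls at 250 kW peak each.
stalls = 12
peak_kw_per_stall = 250  # kW, a typical modern DC fast-charge stall rating
station_peak_mw = stalls * peak_kw_per_stall / 1000
print(station_peak_mw)  # 3.0 MW if every stall charged at full speed at once
```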


> The argumentation is flawed.

Which argumentation are you referring to? You're reiterating an anti-electric talking point that is explicitly debunked in the article.

> “That's nonsense,” says Liebreich. “[In] 1995, [people said] ‘we'll never use the internet because there are not enough modems’. [In] 2000, ‘we'll never do online video because there isn't enough bandwidth’, then, ‘you can't do multiple streams of video because you will never get fibre to the home’. We’ve got 30 years between now and 2050 [when countries plan to reach net-zero emissions] and we will simply have more and more investment. We’ve dug up the streets for cable, phone, gas, cable, fibre, electricity. It's a thing we do. We know how to just build slowly over time. This is not rocket science.

> “Plus, there's smart charging. And of course, we know we're going to be doing this because we're also going to be having to add capacity because of electric heating. And so the idea that you'll say, ‘no, no, we mustn't do that extension of existing infrastructure, we must build a completely new one [for hydrogen refuelling], it's nonsense, frankly.”


https://advocacy.consumerreports.org/research/blog-can-the-g... ("A question that frequently comes up when discussing electric vehicles (EVs) is: “Can the grid handle it?” The short answer is “yes.”") [Blog post demonstrates the math showing the grid can handle 100% EVs]

Superchargers are primarily for road trips and for people who need to fast charge because they don't have home charging (for now; infrastructure is rolling out very fast). Most drivers will charge at home, at work, or at other locations that may have a Level 2 charger (vs. a DC fast charger). A 120 V, 15 A outlet is sufficient to fully charge your vehicle if it is left for 2-3 days or more at an airport or another long-dwell location.
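
The 2-3 day figure is easy to sanity-check; in this sketch the 60 kWh pack size and the 80% continuous-load derating are assumptions, not from the comment:

```python
# How long a standard US outlet takes to fill a typical EV pack.
volts, amps = 120, 15
continuous_amps = amps * 0.8                 # NEC-style 80% continuous derating
charge_kw = volts * continuous_amps / 1000   # ~1.44 kW
pack_kwh = 60                                # assumed mid-size EV battery
hours = pack_kwh / charge_kw
print(round(hours / 24, 1))  # ~1.7 days; with charging losses, the quoted 2-3 days
```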

> Batteries are not a suitable large scale energy storage.

They will get us most of the way to success. Some combination of seasonal storage, renewables overbuilding, transmission, and limited fossil generation (peakers, cogeneration, etc) will be needed to get close to 100% net zero.

https://www.tesla.com/ns_videos/Tesla-Master-Plan-Part-3.pdf

https://www.energy-storage.news/nrel-rapid-growth-of-energy-...

https://www.nrel.gov/docs/fy22osti/81779.pdf

Enough sunlight falls on the Earth roughly every hour to power humanity for a year. We're just arguing about how to shuffle the electrons around.


> Most will charge at home, work, or other locations that perhaps have a level 2 charger

With the push for cheaper, multi-tenant housing, where many households share the same parking lot, how does charging at home work? How do you bill the person charging? Does every spot get its own charger?


Re 1: The US built 1,000,000 new homes last year without anyone asking "how will the country cope with this increased need for wood/etc.?" Most new homes will have multiple 50 A circuits to power an electric range and/or HVAC system. Most Level 2 electric car chargers max out at 50-75 A, and can be configured to charge only during times of low grid demand. Every electric car in existence won't be DC fast charging all the time. In fact, I'd wager that for most EV owners in the US today, DC fast charging is a very, very low percentage of their total charge allocation (between DC fast and AC Level 2).


We have plenty of wood; go outside and see for yourself. We currently have very visible issues with the power grid, and as you've pointed out, you've doubled every house's power demand.

Let's look at what happened in TX when the grid shut down. How will people charge their cars to leave the area? Maybe people used the gasoline engines in their cars to provide heat.


TX has, by choice, a mismanaged power grid with no interconnects to load-share; how does that apply to the rest of the US? CA's problems aren't demand driven.

I live in a state that gets almost its entire gasoline supply from a single pipeline (Colonial) that has burst or leaked and left us with mass shortages several times in the last decade.


Your problem isn't nationwide either; so at least two states are having issues.


Re 1: Charging mostly happens overnight, when there isn't a lot of demand. Smart meters can distribute the demand to spread it out optimally. BEVs are also very valuable for the electrical grid when they can feed power back into it, further balancing supply and demand.
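
A toy sketch of what "spread it out optimally" can mean in code; the overnight prices and the greedy strategy are invented for illustration, not any real smart-meter protocol:

```python
# Toy smart-charging scheduler: assign a car's needed charging hours to the
# cheapest remaining overnight slots (prices are made-up illustration data,
# keyed by hour of day, in $/kWh).
prices = {22: 0.18, 23: 0.12, 0: 0.08, 1: 0.07, 2: 0.07, 3: 0.08, 4: 0.10, 5: 0.15}

def schedule(hours_needed, prices):
    """Pick the cheapest hours for this car to draw power."""
    cheapest = sorted(prices, key=prices.get)[:hours_needed]
    return sorted(cheapest)

print(schedule(4, prices))  # the four cheapest hours: [0, 1, 2, 3]
```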

Re 2: Batteries keep getting cheaper due to economies of scale. Batteries for grid stabilization don't need high energy density and will soon be (or maybe already are) more cost effective than pumped-storage hydroelectricity.


Your argument is textbook anti-EV propaganda that has been debunked for over a decade now.

Ask anyone who owns an EV how many times a month they go to a supercharger. The answer is probably zero, because unless they went on a road trip, they are definitely charging at home. You don't even need to install a 240 V charger. For most people's commutes, a 120 V charger (which charges at ~1.4 kW) can recover about 40 miles overnight, which is equivalent to the average American's commute.

It's still not ideal for apartment dwellers yet, but they don't need a super charger in their complex. They just need a standard power outlet accessible from their parking space.

I know managing the grid is probably really hard. But if we can power every home in America in less than a century, I'm pretty sure we can figure out how to add standard power outlets to parking garages without crashing the economy.

EVs are in fact a solution to the intermittent nature of renewables, since they can charge any time the car is sitting still, such as during the 8 hours of sunshine that people spend in office buildings every day.
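
The 40-miles-overnight figure above works out; the dwell time and the ~3 mi/kWh efficiency are assumed typical values, not given in the comment:

```python
# Miles recovered overnight from a plain 120 V outlet.
charge_kw = 1.4       # as quoted above for a 120 V "level 1" charger
hours_parked = 10     # overnight dwell, assumed
miles_per_kwh = 3.0   # assumed typical EV efficiency
miles = charge_kw * hours_parked * miles_per_kwh
print(miles)  # 42.0 -- roughly the average US daily commute
```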


Source for either of these claims, other than "just imagine"?

Of course infrastructure is a concern, but I haven't seen any sources claiming we are unprepared to sustain significant EV deployment. And of course it will never happen overnight, so that's a strawman.


Have you not paid attention to the rolling blackouts in CA? Or to what happened in TX recently?


1. Yeah, we upgrade the grid over time as demand increases, exactly as we have been doing. Nobody gets bent out of shape when a grocery store or something goes in and it requires power. This is only a problem if vehicles are "suddenly" battery powered, which is not a thing.

2. Yes they are.


You forgot that the power station is a centralized point of failure, whereas the fuel distribution network is not. This makes power stations a much more valuable target, as you can easily disable a country's transportation infrastructure.


I wonder why Apple does not include a hypervisor in iOS, so that "risky" processes such as iMessage and Safari (maybe a Secure Safari version) could be executed in a separate virtual machine. The hardware (CPU + RAM) in iPhones these days should be able to sustain it. Or would there be serious drawbacks to this?


I guess degrading battery life is the key concern here. Battery life is already bad enough, and the competition to make batteries last longer is fierce.


I'm not from the US and I am not a native English speaker.

The federalist papers are, in my opinion, a true work of art. I find the linguistic elegance and finesse of these texts highly admirable.

Unfortunately, to my dismay, the usage of such "complex" language is discouraged nowadays. Or to express it in the appropriate linguistic style: Our modern era seems to espouse a predilection for accessibility and brevity, often at the expense of the stylistic grandeur and intellectual richness that was once the hallmark of our written and spoken discourse.


It's really nice to see that C++ implementations in the ML field are growing. Personally I do not like Python very much. C++ does not force you to do things in a specific (Pythonic) way.


Growing? You're always free to skip Python and reach for the actual library instead of its Python bindings.

And since C++17, it is quite easy to write Python-like code.


Just wondering how this guy finished his MIT physics undergrad studies without reading a book? LOL


It's actually an article from 2011/2012. "Avoid meat" refers to "The China Study", which is deeply flawed. I don't think there is scientific consensus supporting this claim. There are actually studies which state quite the opposite.


I tend to disagree. (Modern) C++ is an incredibly powerful programming language. Contrary to some other languages it gives the developer maximal freedom and does not impose a particular way of doing things on the developer.


> Contrary to some other languages it gives the developer maximal freedom and does not impose a particular way of doing things on the developer

You seem to be implying that's a good thing. It's literally not.

Imagine if I created a programming language where every random string of characters was a valid program (cue the Perl jokes). Clearly this language permits even more freedom than C++, but this would be a nightmare for programming.

Constrained structure is essential to programming. Sometimes you need to go beyond the constraints of a more common language, in which case C++ might be a good choice, but that's the exception, not the rule.


Your assumption seems to be that if 1) an existing language has finite utility, and 2) a language where any string is a valid program has no utility, then it must be true that utility decreases with the unconstrained-ness of a language (and thus increases with more constraint).

However, this is not true. You only have to look to the other extreme to see there must be a middle ground. A language where there is only one valid program has no more utility than one where any string is a valid program.

Because this relationship of constraint and utility is clearly not simple, we can't use those extrema to judge if C++ is less useful because it gives so much control. There might be some "local extrema" where a language fits a niche. C++ might fill that niche, or it might not, but I think it needs a bit more of a nuanced consideration than "less constraint, bad".


> Your assumption seems to be that if 1) an existing language has finite utility, and 2) a language where any string is a valid program has no utility, then it must be true that utility decreases with the unconstrained-ness of a language (and thus increases with more constraint).

The converse is actually the OP's argument, i.e. that a language's utility increases with unconstrainedness. I merely showed that to be false, and argued that constraints are essential, but nowhere did I suggest that utility scales with the number of constraints.


>"You seem to be implying that's a good thing. It's literally not."

I think it is.


[flagged]


TIL that "engineering" is "cult thinking".


Engineering is an activity, not a restriction on activity.


In theory, yes. In practice, few people use C++ fully; too often you find in-house "style guides" vetoing specific things, such as Google's famous "no exceptions".

At the point where you rule out using available facilities of the language, you might as well use something else.


> (...) such as Google's famous "no exceptions".

If I recall correctly, Google's rationale regarding exceptions is that their legacy code is not exception-safe, so they were faced with the choice of either rewriting critical parts of their legacy code to handle exceptions or not using them.

Also, their "no exceptions" rule only applied to work involving their legacy code.

I'm too lazy to find the source, but that bit of trivia was already discussed ad nauseam even on HN.

The moral of the story is that you should not mindlessly repeat an opinion without knowing the rationale behind it, instead pulling appeals to authority to cover up the logical hole. That's how you end up contradicting even your own source, just because you believed that's how the cool kids do things.


> Also, their "no exceptions" rule only applied to work involving their legacy code.

That's not the case, the no exceptions rule applies to legacy and non-legacy code: https://google.github.io/styleguide/cppguide.html#Exceptions.


(Opinions are my own)

> Also, their "no exceptions" rule only applied to work involving their legacy code.

The reasons for not using exceptions are outlined in this post: https://abseil.io/tips/76

absl::Status is basically exceptions without the language sugar. Or, another way of looking at it, golang-style err in C++. Basically, non-local control flow is dangerous because it isn't evident to the programmer when something deep in the stack will bubble up an exception.

It is super annoying to deal with sometimes but there are a lot of amazing helper macros (yuck for other reasons) that exist. More details can be found looking through here: https://cs.opensource.google/search?q=ASSIGN_OR_RETURN&sq=


> Basically, non-local control flow is dangerous (...)

This is the crux of the error you're making. Exceptions are not about control flow. Exceptions are a transparent and clean way to handle exceptional events. Exceptions are not intended to, say, handle the status code of an HTTP request. Exceptions are intended to handle exceptional and potentially unrecoverable errors in a safe and controlled manner, such as failing to allocate memory, regardless of where and how they pop up.

Therefore, suggesting classical C-style return codes or specialized monadic types to handle results as alternatives to exceptions completely misses the whole point of exceptions and, more importantly, all the classes of problems they are designed to eliminate.


> Exceptions are a transparent and clean way to handle exceptional events.

I'd sum up this argument as: Exceptions are syntax sugar that allow callers of a function to ignore the error cases of something they are calling and depend on something that calls into them to handle the error case.

An example:

    void HandleRequest(const Request &request, KV &kv) {
        ...
        kv.set("a", 10);
        ...
    }
In this example kv.set will raise an exception. The programmer implementing HandleRequest isn't directly exposed to this fact, and so to them, the reviewer, and future onlookers this code looks correct. Now let's say kv.set() throws an exception in production under a specific case: maybe two people attempted to set the same value at the same time, or there was a networking issue. Doing this in the context of a webserver might make sense, since the webserver might translate exceptions into error codes, but that's not the end-all-be-all. Suppose we refactored to something like this:

    Something CreateSomething(....);
How do you know if this function will throw an exception? How do you find all of the possible exceptions that can be thrown? Statically you can't really. If instead you see

    absl::StatusOr<Something> CreateSomething(...);

You can tell for sure that the result has some error that needs to be handled. Your original code:

    kv.set("a", 10);
This code no longer compiles in an absl::Status world. Instead you'd need to do something like:

    CHECK_OK(kv.set("a", 10));        // this will panic
    RETURN_IF_ERROR(kv.set("a", 10)); // bubble up

From this we get:

1. A stack trace, since RETURN_IF_ERROR adds metadata about the call site.

2. A guarantee that the code will not compile if errors are not handled.

3. A guarantee that future readers know that some very high level function could probably call into code that can produce an error you need to handle.

This matters much more if things like this are happening:

    kv.beginTransaction();
    kv.set(...);
    kv.endTransaction();

This could be handled by destructors in this case but in other cases:

    otherService.startingWork();
    kv.set(...);
    otherService.doNextStep();
There are cases where destructors do not make sense. You do not always want to call `doNextStep()`, as it would be 100% wrong in the case where we cannot set our value in our KV store. Contrived, but I've run into these in real-life services. If a developer sends me the above code snippet, I might LGTM it. If the developer instead sends me:

    otherService.startingWork();
    if (!kv.set(...).ok()) { log("something went wrong"); }
    otherService.doNextStep();
I'll be able to point to the exact problem with this code much more easily. Also, if there's an outage and I need to read this code, I can clearly see how this `something went wrong` in the logs correlates to an incorrectly called doNextStep().

I'm not saying that Status is perfect (I'm not 100% sold), but exceptions are a type of control flow in an abstract sense. The problem some people have with them is that it's control flow you can't audit.


No.

Exceptions are a way to ensure that failures are directed immediately to a place designated to deal with such a failure.

They are, in particular, not any sort of "syntax sugar", unlike "?" in certain other languages, or your StatusOr thing. A function that throws an exception does not, in any sense, return to its caller. It does not construct any sort of return value. It does not consult the stack to see where it came from and resume running there.

And, exceptions are totally auditable. There is never any hint of ambiguity or uncertainty about where an exception will take you.

You can be confident that if a function was exited via a throw, it was because it could not perform the requested action. And you can be confident that if an exception was not thrown, the function satisfied whatever postconditions it promised.

So, you don't need to know if a function might throw. You may instead assume any function might throw. If it doesn't, then it has satisfied its documented postconditions. Your obligation is only to ensure that destructors clean up any intermediate state on the way out. These identical destructors get exercised every time through the code, so are exercised frequently.

Error-handling code at places where the error cannot actually be dealt with properly, that just tries to propagate the failure up the call chain, is typically not well tested, and often cannot even be triggered in testing.


I guess we can consider AOSP legacy code then, given how they use C++.


Yes. Legacy in every sense, including "abandonware".


Educate yourself on Android source code.

You can even start by the new entries related to Rust.

https://source.android.com/setup/build/rust/building-rust-mo...


Why would I care about anything on that page?

I have no need to read hype about Rust, regardless of where it might run. And, I have no desire to build Android apps, in any language.


To educate yourself about what ISO C++ companies are actually doing with the language.

The reference to Rust was just an example of how such members are also looking for alternatives; other big names are looking into Swift, C#, whatever.

Meanwhile clang crawls along on its support for newer ISO C++ features, as the biggest contributors now have their focus elsewhere.


> Meanwhile clang crawls along on its support for newer ISO C++ features, as the biggest contributors now have their focus elsewhere.

What did you mean by this? I am not an LLVM expert or anything, but more often than not, the new features I read about making their way into LLVM are requirements driven by clang adding things from new C++ standards. These improvements then benefit other frontends, like the one for Rust.


Google and Apple were the biggest contributors.

Well, for Apple, C++ is relevant only in the context of Metal shaders (a C++14 subset) and IO/DriverKit (an embedded-C++-like subset); everything else is about Objective-C and Swift, with C++17 being good enough for whatever else they need.

Likewise with Google's style: it is all about the C++ guidelines at the Googleplex, where not even all C++17 features are welcomed.

Everyone else seems more interested in C++ features for their own platforms than in contributing to upstream (thanks to the license), so there you go.

https://en.cppreference.com/w/cpp/compiler_support/20


Indeed the details matter, but my example was just designed to be something that might be somewhat more recognizable than those situations and restrictions I've encountered personally.


> (...) my example was just designed to be something that might be somewhat more recognizable (...)

If your goal was to use that poor example to support the idea that people stick with subsets of C++ for various reasons, that example failed to support the assertion. Thus it makes no sense to stick with a patently wrong observation just because it's easier to recall.

Getting back to the topic, as far as I know there are only two features of C++ whose adoption is up for debate: exceptions and template metaprogramming. The exception-handling debate only makes sense for very low-level applications and for refactoring legacy exception-less code, which in practice applies to almost no one. The template metaprogramming debate typically boils down to YAGNI and the need to avoid resume-driven development. Nevertheless, both features are used extensively, whether directly or indirectly (see the STL), and in general there is no reason to debate whether people should use them, unless you have very specific requirements in mind (e.g., avoiding generated magical code in embedded applications, or very high performance applications where you need tight control over everything down to which instructions are generated).


>"In practice few people use C++ fully"

Because there is no need to. C++ is vast, and there is no point in exploring all possible ways of doing something once a decent path exists.


Practical, old languages tend to do this. In C++ or Common Lisp, which share little other than being multi-paradigm (unopinionated), it is fairly common to have house styles or accepted subsets.

Languages that try to build in some "house style" are my personal dystopia - such as early Java or current Go. Which does not say they are not effective, just that I personally hate the philosophy.


Both C++ and Common Lisp suffer a huge accumulation of historical baggage. It is this that makes the languages problematic, not being multi-paradigm. Common Lisp has first/rest as well as car/cdr. It has streams and numbers, and the functions on them seem generic but aren't (always) generic functions. It still has rplaca despite (setf car) being a valid function name.


Sure; but "historical baggage" is a function of being multi-paradigm and of outliving multiple generations of computer architecture. I would not view baggage as necessarily problematic. What makes C++ problematic is that template metaprogramming evolved in a very ad hoc way, and now we need to keep all of it backwards compatible.


My claim is that these multi-paradigm languages may be better thought of as a core plus some paradigm-specific sublanguages, all jammed into the same syntax and glued together in random places. A better multi-paradigm language could avoid having separate parts that poorly work together. The thing that the comment I first replied to called ‘multi-paradigm’ is I think really the bad glue job like a plate that was smashed, glued together, smashed, and then glued together a second time. I don’t think it is an essential quality of multi-paradigm languages. For example Raku (fka perl6) fits different paradigms together more smoothly, and Julia can be used in a procedural or functional way as well as a more object-oriented way (though Julia’s objects tend not to contain as much state as a typical object-oriented language)


> In theory, yes. In practice, few people use C++ fully; too often you find in-house "style guides" vetoing specific things, such as Google's famous "no exceptions".

I've almost never seen that in practice. I always see people talking about it in forums, but in real-world gigs I don't know people who artificially restrict their codebase with braindead rules like that.


You haven't seen safety-relevant code, then. High SIL and ASIL levels (combined with those systems being embedded) result in such rule sets for good reason.

I have yet to see more than "print and bail out" in catch blocks. In embedded, there is nobody who can read your cry for help, and especially in fail-operational systems that is just not an option.

Herbceptions are just not there yet, and until then we help ourselves with things like "expected", for example.


Enjoy the source code of Android, Windows and macOS frameworks.

In fact, this is probably yet another reason why Apple and Google aren't in a hurry to bring clang up to the latest ISO standard, and other companies in the clang ecosystem even less so.


Are there any usable linters around to enforce such rules?


Static analysers like Coverity, Klocwork, and QA-C++ will do. Usability is... well... subjective.


More than the usability, the biggest problem is getting everyone onboard.


I don't think this is true. My experience hiring C++ developers does not confirm it.


Ideally yes, but in practice there are all kinds of idioms in the wild.


I really wonder if the CPU cores are able to access the memory at the specified high bandwidth, or if it's just the GPU cores.


Yes, they are.


Excellent news; hopefully other European countries will follow that example. Germany especially should immediately rethink and stop phasing out its remaining operational nuclear power plants.

