NerdsCentral's comments (Hacker News)

He did not actually say that - but it is the implication of his comment.


Agile focuses very hard on the short term. Hopes and dreams are not short term. Agile is a way to implement innovation, but it will not create it. My experience is that the shorter the stories, the less innovative the code becomes. The really cool stuff gets done at nights and on weekends.

But was it not ever so?


I think this is inevitable. Slicing and dicing the problem into a bunch of tiny sub-problems that can be implemented in short sprints rests on the assumptions that:

  1. None (or few) of the problems are very big.

  2. All of the problems can be solved without much brain strain.

  3. A fairly straightforward, well-known architecture can be used.

Those assumptions are clearly optimized for projects like business applications, Web front-ends, enterprise services, etc.: situations where nothing depends on somebody getting their brain really deep inside a problem, marinating on it and being frustrated by it until eventually a bolt of inspiration knocks them over while they're washing their hair, causing them to hit their head on the washcloth rack on the way down, so that the idea tragically disappears down the drain along with the mingling blood and. . .

ANYWAY. For skunk works projects, we've already got a different strategy called a skunk works team. Nothing wrong with using those. Right tool for the job and whatnot, eh?


Not sure of the connection here. BTW, check out the nerds-central post on why Thorium matters if you are pro-nuclear; it might have some good ammo for you to use.


The post is talking about servers. Most scripting is for code running on servers. Servers do not wait for the user. Your comment has zero merit.


No... your comment and your whole shortsighted video have no merit. Servers spend time waiting for users as well. Whether it's waiting for the next HTTP request or the next batch processing job, servers wait for people too.


OK, I'll explain this, as you clearly have no idea what you are talking about. When one commissions a server or server farm, it is done against a service non-functional requirement, for example a 1-second page serve time at peak load. This means that a lot of the time the servers might be waiting, but not at peak load. Nevertheless, modern servers will clock down when waiting. Now, what sets the peak load is the efficiency of the software, the number of people using the server, and/or the run length of the batch jobs. The more efficient the software, the fewer servers are required to meet the peak load, and so the less power is used all the time. Further, the more efficient the software, the less often servers will have to clock up during off-peak times. Do you finally get it? If not, I suspect you have zero idea about large-scale computing and are here only to defend your lack of ability to program anything other than scripts.
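The efficiency-to-farm-size relationship above can be sketched with back-of-envelope numbers. All figures here (peak load, per-request cost, core count) are illustrative assumptions, not from the thread:

```python
import math

peak_requests_per_sec = 10_000   # assumed peak load from the non-functional requirement
cores_per_server = 16            # assumed hardware

def servers_needed(ms_per_request):
    """Servers required to meet peak load at a given per-request CPU cost."""
    capacity_per_server = cores_per_server * 1000 / ms_per_request  # requests/sec
    return math.ceil(peak_requests_per_sec / capacity_per_server)

# A 10x gain in software efficiency shrinks the farm (and its
# always-on power draw) by roughly the same factor.
slow = servers_needed(50)   # inefficient implementation: 50 ms/request
fast = servers_needed(5)    # optimised implementation: 5 ms/request
```

The point being made in the comment drops out of the arithmetic: provisioning is driven by peak load, so per-request efficiency multiplies directly into server count and hence power.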



Most real scientific computing (quantum mechanics - in which I did my doctorate, meteorology, astrophysics etc.) is done in C or more normally FORTRAN. The code will use highly optimised routines like BLAS. It is normal to perform careful inner loop optimizations for each target platform.
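This is not the FORTRAN the comment describes, but NumPy delegates its linear algebra to the same kind of optimised BLAS routines, so it makes a quick sketch of the gap between a hand-rolled inner loop and an optimised kernel:

```python
import numpy as np

def naive_dot(a, b):
    """Plain interpreted inner loop: same arithmetic, no BLAS."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(1000, dtype=np.float64)
b = np.arange(1000, dtype=np.float64)

# a @ b dispatches to the BLAS dot routine; both give the same result,
# but the BLAS path is vectorised and cache-tuned per platform.
assert naive_dot(a, b) == float(a @ b)
```

The per-platform inner-loop tuning the comment mentions is exactly what a good BLAS build does once so that application code doesn't have to.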

Don't talk about things you don't understand.


> Don't talk about things you don't understand.

Fair enough, but you do realize that not everyone optimizes inner loops after switching to C/Java? Certainly not to the extent of scientific computing, or even the language shootout snippets.

Also, to address the power usage of one server: the 6.6-tonne number seems to come from the server drawing 1 kW of power continuously for an entire year. The biggest server I could find on Dell's site (E910) comes with a 1.1 kW power supply. This DOES NOT mean that it draws 1.1 kW continuously. Indeed, outside of peak usage hours it probably draws much less. This cost is amortized over many users as well. Even slow, interpreted languages can serve hundreds of users on such a server.
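The 6.6-tonne figure can be reproduced from the 1 kW-continuous assumption. The grid carbon intensity used here is an assumed round number, not something stated in the thread:

```python
hours_per_year = 24 * 365          # 8760 hours
continuous_draw_kw = 1.0           # the post's assumption: 1 kW, all year
kg_co2_per_kwh = 0.75              # assumed grid carbon intensity

annual_kwh = continuous_draw_kw * hours_per_year          # 8760 kWh
annual_tonnes = annual_kwh * kg_co2_per_kwh / 1000        # ~6.57 tonnes CO2
```

At roughly 0.75 kg CO2 per kWh the arithmetic lands within rounding of the quoted 6.6 tonnes, which supports the guess about where the number comes from.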

Compare this to driving 10 miles a day in a medium-sized car, which produces 2 tonnes of CO2 annually and, unless you're diligent about carpooling, probably serves one person. Even a small car will still emit half of that. If some new software helps even a small number of people telecommute instead of driving, the result is a net positive. For this to work, such software has to be cheap (certainly cheaper than driving), and for that to happen, programmer time must be conserved.
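The amortization point can be made concrete. The user count below is an illustrative assumption; the server and car figures are the ones quoted in this exchange:

```python
server_tonnes = 6.6     # annual CO2 for the 1 kW-continuous server (from the thread)
car_tonnes = 2.0        # annual CO2 for 10 miles/day in a medium car (from the thread)
users_served = 300      # assumed number of users amortising the server

per_user_server = server_tonnes / users_served   # ~0.022 tonnes per user per year
# Even a heavily loaded server works out to a tiny fraction of one commute.
```

On these numbers the per-user footprint of the server is roughly two orders of magnitude below one person's driving, which is the core of the telecommuting argument.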


Good arguments. I based 1 kW on the 800-watt rating of the machine mentioned, plus a/c etc. It is all ballpark, and I am a bit shocked you are the only person to call me out. But it does not matter, because even if it is 1 tonne per year, that is still plenty. The car thing is a good point, but I think you are stretching it with the telecommuting, especially as the key to telecommuting is fast software (ever tried implementing video streaming in an interpreted language?). The amortized-over-many-people point is correct but not relevant to the argument, as the effect is multiplicative rather than additive and so has no impact on the overall calculation.


Unfortunately, right now, we do not have unlimited energy. If we did, the argument would simply shift to the limited mineral resources we have to make the computers...


I was not expecting this to produce so much interest. I guess a lot of people have asked about how to implement try/catch/finally in C++.


Wow - how did it get that badly screwed up? Thanks for pointing that out.


Thanks for upvoting!

