Gazk's comments | Hacker News

I remember playing Hard Drivin' on the C64; it was a terrible port. The Spectrum version was great, though.

Spiritual successor? In terms of realistic car modeling, I would say the rFactor series.


In terms of modeling I'm sure you're right. Something about Hard Drivin' makes it feel not like a straight up racing game. It was an odd combination of physics + disembodied stunt tracks.

The combo of a sim with a playful environment seems very hard to find.


I see your point, maybe something like BeamNG.drive when it is finished.


I think you found it :)


There is OWIN for .NET which is the equivalent of Connect / Rack / WSGI. So hopefully in the future it won't be reliant on IIS.
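For anyone unfamiliar with the WSGI side of that comparison, the contract is tiny: an app is just a callable the server invokes, which is what decouples it from any particular web server. A minimal sketch in Python (this shows standard WSGI, not OWIN itself):

```python
# A minimal WSGI application: a plain callable taking the request
# environment and a start_response callback, returning the body.
# Because any WSGI-compliant server can host this callable, the app
# isn't tied to one server -- the same decoupling OWIN aims to give
# .NET apps with respect to IIS.
def app(environ, start_response):
    body = b"Hello from a server-agnostic app"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Connect (Node) and Rack (Ruby) define essentially the same shape of interface in their own idioms.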


This looks like a great read. It must have been challenging to do a decent arcade conversion for the 8-bit home computers. The Spectrum's CPU was probably slower than the Z80 sound CPU on the arcade board.


It will be interesting to hear how much noise a Formula E car will produce. Engine noise is a big part of the atmosphere at an F1 event. I've seen some TTXGP bikes race and they are eerily quiet.


This is the impression I had as well. With all the factories and services and dependency injection.


The company I work for is going in the opposite direction: I'll be expected to learn PHP soon after spending years developing with .NET. I'd much prefer to be switching to Ruby or Python. Anyway, if you want to switch to Ruby on Rails in the future, it won't be a massive leap; ASP.NET MVC is very similar.


Man, that is weird. As a .NET developer I recently volunteered to maintain a PHP site for a not-for-profit museum. The site was written by a local web dev company, so without knowing better I assumed it was built at least somewhat professionally.

It took me a few weeks to get the hang of it, working a few hours each evening. I enjoyed the process but now that I understand how it hangs together I fail to see the appeal. It is a fairly simple environment, without much depth or breadth. I now understand why the PHP job ads I see seem to pay a lot less than .NET jobs.


This is great until someone removes your finger so they can access your phone.


If someone is going to remove your finger to access the phone, you would probably just give them the four-digit PIN.

This is a way to get normal people to use better security than nothing and to give them the convenience of not having to enter an App Store password every time they install an app.

It's not meant to protect special forces operatives in the field or CIA analysts' contact lists.


Or just one digit, your finger.


I don't know about you, but I'd gladly give my password to someone today if they were threatening to cut off my finger. Unless you're seriously harder than me, what is the difference?


But with this new technology $they don't even need you alive to get to your $secret_data.


I think that this would be easily detectable by testing capacitance. The steel ring around the home button could potentially be used for that.


A dead finger is the same as a live one as far as capacitance goes, unless it has been dead so long that all the moisture is gone, in which case you could just dip it in water prior to the scan.

Having said that, the idea of cutting off a finger to access the device doesn't really make much sense. You've got to incapacitate someone pretty well to take their finger off, so you might as well just force them to touch the phone while they are so incapacitated, unless you really like chopping off fingers.


Humans are pretty powerful conductors; are you sure a sensitive capacitance sensor couldn't tell the difference between a finger and a finger attached to a body? I don't know it for a fact, but I'd be surprised if not.


A garden variety capacitive sensor can't tell the difference between a human finger and a hot dog.

How do you propose it would, beyond measuring the capacitance and mapping it to an accepted band? Attempting that is way too fragile a solution due to variability in humans and local weather conditions.

And even if you did put in the effort for that, an attacker could still fairly easily match the dead finger's capacitance to the correct band.
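As a toy illustration of why that band-matching idea is fragile, here is roughly what such a check would boil down to (all numbers invented for illustration):

```python
# Hypothetical "accepted band" liveness check: compare a raw
# capacitance reading against a calibrated human range. The band
# values are made up. Anything -- a dead finger brought to the right
# moisture level, or just unusual weather -- that lands a reading
# inside the band passes, which is the fragility described above.
ACCEPTED_BAND_PF = (80.0, 140.0)  # picofarads, invented calibration

def reading_looks_human(capacitance_pf: float) -> bool:
    low, high = ACCEPTED_BAND_PF
    return low <= capacitance_pf <= high
```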


I do hope that the one removing the finger knows that.


Capacitive sensing doesn't detect if a finger is alive or not.


That's why eye-scanner authentication makes me shiver...


There is a new SQL Server release, SQL Server 2014.


In the UK we have Santa Pod, which runs Top Fuel drag races. It's definitely an experience. I have been to a few F1 races, and the Top Fuel dragsters are noisier.


I think it's a better deterrent to just release software updates often. Of course, you need to give the user a compelling reason to want to update.


In the old days (80s and early/mid 90s), when people would distribute small and simple patches to disable protections (i.e. "crack" executable files), a fast release cycle for the software thwarted the simple cracks. This situation did not last long. The crackers started using more sophisticated patching techniques, like search-string patching and key generators.

The task of maintaining on-going disassembly across multiple release versions of some software is actually straightforward. The "dumb" (but useful) way to do it is by fingerprinting all of the subroutines in the old disassembly, and then using the fingerprints to identify the similar routines in the new disassembly (IDB2PAT). The "smart" way to do it is the graph-theoretic approach of Halvar Flake.
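The "dumb" approach can be sketched in a few lines. This is a simplified illustration in the spirit of IDB2PAT, not its actual implementation: real tools mask out relocations and variable bytes with wildcards, whereas this toy version just hashes each routine's raw bytes.

```python
import hashlib

def fingerprint(routine_bytes: bytes) -> str:
    # Real pattern tools wildcard relocated bytes; here we simply
    # hash the raw bytes of the routine.
    return hashlib.sha256(routine_bytes).hexdigest()

def match_routines(old: dict, new: dict) -> dict:
    """Map routine names from the old build to names in the new build
    wherever the byte-level fingerprints coincide."""
    new_by_fp = {fingerprint(b): name for name, b in new.items()}
    matches = {}
    for old_name, routine_bytes in old.items():
        fp = fingerprint(routine_bytes)
        if fp in new_by_fp:
            matches[old_name] = new_by_fp[fp]
    return matches
```

Routines that changed between versions won't match exactly, which is where the graph-based matching mentioned above takes over.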

Anyone in the Anti-Virus or compatibility industries can confirm both the capacity and the need to maintain disassemblies across multiple versions of software.

Pumping out a relentless stream of new versions of your software is no longer a deterrent, and hasn't been for over a decade.


In the 80s/90s, were software companies able to develop, test and distribute updates as efficiently as the warez community?


I think it's an unfair question. The creators are always at a disadvantage since the replicators always leverage and reuse the efforts of the creators.


I'm sorry, I did not mean to be unfair. I was curious whether the companies were able to distribute updates pre-broadband. I can remember downloading the twenty-something floppies for OS/2 over dial-up.


At one point in time, software companies sent updates on magnetic tape through the postal mail. One of the cleverest hacks I've read about was when a group doing penetration testing mailed a fake (backdoored) update tape to the target.

When it comes to the efficiency of distribution, it's best to think of it in terms of the constraints and requirements.

Without a way to duplicate and distribute their products to customers, software companies could not exist, so the capacity to duplicate and the ability to distribute are both requirements.

Those very same duplication and distribution methods used by the company can also be used by others to further (re)distribute additional copies.

The difference is, the software companies are operating under the constraint of needing to make a living by selling copies of their products, so there's really no way to make a fair comparison on the efficiency of the methods used by the companies versus those people making additional copies. You're essentially comparing farmers to chefs; one produces food, while the other prepares the food.

