Hacker News | Ayeceedee's comments

Ah! Small note. There are methods of compression and encoding that allow for scalability. There's some fancy signal processing you can do to encode multiple <framerates / resolutions / compression qualities> into a single bitstream without necessarily storing redundant data.

But I'm not in industry, and the last time I poked my head in, I recall things being way uglier and more complicated than I had imagined. There was a good conference talk that was linked here once, but I've lost it. It talked about the sort of awful, buggy things (formatting/file wise) that people try to upload that break everything.


> Small note. There are methods of compression and encoding that allow for scalability. There's some fancy signal processing you can do to encode multiple <framerates / resolutions / compression qualities> into a single bitstream without necessarily storing redundant data.

A big note: YT never used any of this, because they've always stuck to widely used standard codecs and media containers. All the lower-quality renditions they provide take nearly as much storage as the highest one (so roughly 2x total), and the bitrate ladder is the same for all videos. As for the download side, a player in auto quality first fetches the highest quality the current download speed allows; if there's headroom, or the connection isn't stable, it also fetches the 480p version, and if the speed was enough to download the current and next chunk at a lower quality, it would download the best one and switch over to it on completion (there was a time when the player would download the whole video at the best quality after it had finished at the optimal one).
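The auto-quality selection described above can be sketched very roughly like this. To be clear, this is a toy illustration, not YouTube's actual player logic; the rendition names, bitrates, and `headroom` factor are all made up:

```python
# Hypothetical adaptive-quality chooser: pick the highest rendition whose
# bitrate (with some safety headroom) fits the measured bandwidth.
RENDITIONS = {  # quality label -> approximate bitrate in kbit/s (invented numbers)
    "1080p": 8000,
    "720p": 5000,
    "480p": 2500,
    "360p": 1000,
}

def choose_quality(measured_kbps, headroom=1.2):
    """Return the best quality whose bitrate * headroom fits the bandwidth,
    falling back to the lowest rendition when nothing fits."""
    affordable = [q for q, kbps in RENDITIONS.items()
                  if kbps * headroom <= measured_kbps]
    if affordable:
        # RENDITIONS is ordered highest-first, so the first match is the best.
        return affordable[0]
    return min(RENDITIONS, key=RENDITIONS.get)  # lowest-bitrate fallback
```

A real player re-runs this per chunk as bandwidth estimates change, which is what lets it switch up or down mid-stream.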

> the sort of awful, buggy things (formatting/file wise) that people try to upload that break everything.

YT allows a very narrow list of upload formats, with hard failures in the conversion process, especially for the audio.


Interesting, thanks! I'd love to read more about ways to encode video that allow streaming at a variety of resolutions with a minimum of redundancy, if anyone has any links.


For me, it came up in a course I took last term, which used the textbook "Video Processing and Communications" by Y. Wang, J. Ostermann, and Y.-Q. Zhang.

It's a bit out of date (2002), but the core video encoder material was solid. It's a bit heavy on representing concepts in symbolic/equation notation, which, for me, made the jump from 1D signal processing to multidimensional signal processing tougher than it needed to be.

There are probably better resources so definitely don't just take my word for it! :)

(For scalability, IIRC the terms "enhancement layer" and "base layer" are particularly important, as are the block diagrams that generate them.)
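The base/enhancement split can be sketched in a few lines of NumPy. This is only a toy illustration of spatial scalability, not a real codec: the nearest-neighbour up/downsampling and the function names are my own assumptions:

```python
import numpy as np

def split_layers(frame):
    """Toy spatial scalability: a half-resolution base layer plus an
    enhancement layer holding the residual needed for full resolution."""
    base = frame[::2, ::2]                      # crude 2x downsample
    up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    up = up[:frame.shape[0], :frame.shape[1]]   # trim to original size
    enhancement = frame - up                    # residual, typically small
    return base, enhancement

def reconstruct(base, enhancement):
    """A full decoder adds the residual; a constrained one stops at base."""
    up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    up = up[:enhancement.shape[0], :enhancement.shape[1]]
    return up + enhancement
```

The point is that a low-end client can decode just the base layer, while a capable one adds the enhancement layer to get full quality, without storing two independent copies of the video.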


Why not post these observations instead, then? I think what you've written here is much more constructive than what was written above.


I'm someone who just took a computer vision course. I have other priorities (namely finals in other courses), so I haven't had much opportunity to repeatedly practice some of the more basic tasks in OpenCV. I wouldn't necessarily want to pore over my old assignment code to jump-start my memory on these tasks -- skimming through this now, I love how it's presented, and I can see myself returning to it.

What you've linked targets a different use case. I don't need to relearn the core concepts through tutorials, and I don't need every possible config option from the docs. I just need simple, easy-to-parse demonstrations of basic use cases. This works for me.
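For what it's worth, by "basic use cases" I mean things like grayscale conversion and thresholding. Here's a pure-NumPy sketch of roughly what `cv2.cvtColor(..., cv2.COLOR_BGR2GRAY)` and `cv2.threshold(..., cv2.THRESH_BINARY)` compute; the helper names and the simplifications are mine, not OpenCV's implementation:

```python
import numpy as np

def to_gray(bgr):
    """Approximate BGR->gray conversion: a weighted sum of the B, G, R
    channels using the standard luma coefficients."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

def binary_threshold(gray, thresh=127, maxval=255):
    """Approximate binary thresholding: pixels above thresh become maxval,
    everything else becomes 0."""
    return np.where(gray > thresh, maxval, 0).astype(np.uint8)
```

Seeing each basic operation as a two-line recipe like this, rather than buried in a full assignment, is exactly the format I find easiest to return to.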

>And then it hit me, this is just another TED talk style Medium publishing influencer.

"This" is a bit of an uncharitable way of framing that person, don't you think? Behind those blanket labels is an actual human with feelings and motivations that you and I don't actually know. Handwaving away a person's efforts like this is unnecessarily dismissive, imo. It smacks of "I don't like the motivations I've assumed from you, therefore your output doesn't matter." How would you feel it you were on the other end of that?


+1. I'm at the same noob stage and appreciate easy-to-digest courses. I, for one, am appreciative of the work.


We weren't given many constraints related to the chess piece recognition itself. The course instead asks us to implement a CV research paper, and we chose an existing research project which focused on chess piece recognition.

That lack of constraints led to us running face-first into issues of generalisation and variability within datasets -- exactly what you allude to with limiting the piece sets.

I think in my undergraduate naivety my aspirations were too high with what could reasonably be accomplished. I've spent a lot of time trying to improve an aspect of the project that really didn't need to be improved, which prevented meaningful progress.

Now finals are coming up and I feel terribly stressed. Having trouble functioning. Brain fog, etc. I feel so sad right now.

EDIT: I keep forgetting my password, so apparently I have multiple throwaways now. Sorry.

