There are hardware and software labs, which are administered on paper by PhD students. These include(d): ML (the functional programming language), FPGA/soft-core development, Java tasks, breadboarding some logic, Prolog, and probably some different ones now (looks like some machine learning tasks?). Some of them are referenced and described in the links above. There's also a group project in year 2, a dissertation individual project in year 3, and a small holiday project between years 1 and 2. Overall, a few students get through it without being able to properly program, but most basically self-teach.
> And as a kid, you certainly can't say a classic author is not interesting. You can't say the text is boring, that you don't see talent in it, that you didn't learn anything from it. It has been validated by society, hence it's good.
That is because that statement is both not useful in the context (you're there to pick the text apart) and reflects pretty badly on your understanding: clearly the text has some depth to be analysed, even if it wasn't consciously included. If your conclusion is "rubbish" when you're meant to be making a point about subtext, you're failing; it's pretty simple.
People include subtlety in their art even when they don't intend to. Things can be less than fantastic and still reflect society, the author, and your own experience, which is the point of literary analysis.
Text analysis is a lot like wine tasting: there is something to it, but it's way over the top. Give a brand-new text to 10 experts and they will come up with different interpretations. They will even claim a terrible wine is good because of the bottle.
There is also a huge mentality implication. See for example your reaction: you assume my understanding is bad while knowing nothing about me.
And I just criticized people drawing definitive conclusions about other people so distant that we know little about them. The example in the article supports this and more, and while a few data points are not evidence, it calls for a debate.
I think it's perfectly ok for kids to be wrong about their text interpretation if they produce a personal constructed analysis. First because it's pretty hard to prove there is only one right analysis and you got it. Second because the process is as important as the result. Good teachers target that, but few do.
There's wine tasting and then there's blind wine tasting. The latter has some things going for it as a proper endeavour. The former is just a pastime that's more about finding nice words than actually tasting wine.
Except your comment was about refusing to engage in the activity, analysis of the themes and subtext.
The whole point is that of a subjective analysis, with infinite interpretations which aren't objective views of the text but a product of the interrelationship between the text, the context of the text's writing (including authorial intent), the analyser, their context, and the pre-existing academic context around the text. These aren't objective measures; the aim is to have something to say, to have enough insight into the world to link ideas up and make something that sounds convincing.
Saying the book isn't as good as other people said it was isn't that; it isn't really very useful even as a personal opinion, and it completely misses the point of the exercise.
Surely the time complexity of the abacus sort is O(k·sqrt(n) + c·n)? The time taken for the beads to fall is proportional to the square root of the distance while in freefall, and proportional to the distance itself once at terminal velocity (so strictly O(n)). Radix sort on a known range of integers has some similar-ish properties.
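To illustrate the freefall part of that claim, here's a minimal sketch (assuming simple kinematics with no air drag, so t = sqrt(2h/g)): fall time grows only with the square root of the distance, so quadrupling the drop height merely doubles the time.

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def freefall_time(h):
    """Time (s) to fall distance h (m) from rest, ignoring drag: t = sqrt(2h/g)."""
    return sqrt(2 * h / G)

# Quadrupling the distance only doubles the fall time:
t1 = freefall_time(1.0)
t4 = freefall_time(4.0)
print(t4 / t1)  # ratio is exactly 2
```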
I was going to comment on this already, but even that's way too "fast" based on the accepted notion of time complexity in theoretical CS... abacus sort should technically be an exponential-time algorithm! (And yes, I disagree with the Wikipedia article on this topic, which lists several "possible" time complexities, all of which I consider incorrect.)
The reason being that the accepted definition of time complexity in formal literature is standardized not based on the number of "things" in the input (numbers in a list, nodes in a graph, etc.), but on the length of the input string. This is a common pitfall when people reason about time complexity. Here's why this applies to abacus sort:
1. Abacus sort contains integers in its input string, and by the conventions of formal time complexity, these cannot be encoded in unary (if unary encodings were allowed, several famous NP-complete problems, e.g. bin packing, would have polynomial-time solutions relative to the length of their input)
2. The value of an integer is exponentially related to its length. For example, making a binary integer only one bit longer can double its value, making it two bits longer will quadruple its value, etc.
3. The input to a sorting algorithm like abacus sort is a list of numbers. Even if there are n numbers, the number of "beads" you need to simulate is exponentially proportional to the length of the longest single number.
4. Even if you think that the number of beads can be simulated with an infinite number of threads, or something, you'd still need to decode the (at least) binary input to a unary number of beads, which will take an exponentially-related amount of memory.
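To put a number on point 3, here's a quick sketch (`bead_count` is a hypothetical helper, just tallying the beads a naive bead-sort simulation would have to allocate): the list stays tiny while the bead count blows up exponentially in the bit length of the largest element.

```python
# One bead per unit of each number's value: a direct bead-sort simulation
# needs sum(nums) beads, which is exponential in the bit length of the
# largest input number even when the list itself is short.

def bead_count(nums):
    """Total beads a naive bead-sort simulation would allocate."""
    return sum(nums)

# Each number is only a few bytes longer than the last, but...
print(bead_count([2**8]))    # 256 beads
print(bead_count([2**16]))   # 65536 beads
print(bead_count([2**32]))   # ~4.3 billion beads for ONE 33-bit number
```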
Of course, all this is not to say the algorithm isn't cool in practice, for numbers in ranges that reasonable people care about. It's similar to a weakly-NP-complete problem (https://en.wikipedia.org/wiki/Weak_NP-completeness) like integer bin packing, where the time complexity is dependent on the longest number and therefore technically exponential, but in practice solved for any reasonable ranges of values in polynomial time.
Edit: By the way, this is why the naive prime-number-checking algorithm (check divisibility of a value v by all values up to sqrt(v)) is not an efficient algorithm, even though many people incorrectly think it's O(sqrt(n))! It technically takes time exponential in the number's bit length, so it's basically useless for guaranteeing primality of something like a 4096-bit number.
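A minimal sketch of that naive test makes the scaling concrete: the loop runs about sqrt(v) times, and since a b-bit value is roughly 2**b, that's about 2**(b/2) divisions, exponential in the input length b.

```python
from math import isqrt

def is_prime_trial(v):
    """Naive primality test: trial division by every d up to sqrt(v)."""
    if v < 2:
        return False
    for d in range(2, isqrt(v) + 1):
        if v % d == 0:
            return False
    return True

# Worst-case division count ~= sqrt(v) = 2**(bits/2):
# it squares every time you double the bit length.
for bits in (8, 16, 32, 64):
    print(bits, isqrt(2**bits))
```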
> 3. The input to a sorting algorithm like abacus sort is a list of numbers. Even if there are n numbers, the number of "beads" you need to simulate is exponentially proportional to the length of the longest single number.
I think this is where we'd mismatch: I'd assume it's baked into the problem space that you have N numbers to sort, each of size K where K is fixed and bounded, as in "sort these N 16-bit ints, for some arbitrary N". It would be the same problem space that can be solved in O(n) with bucket/counting sorts.
This is true, but note that the problem in general doesn't specify that the ints are only 16-bit. It's already well established that "linear time" sorting algorithms exist (e.g. radix sort) for numbers that are all a bounded size. However, the key is still that the length of each number must be bounded, because that ensures that the length of the input relates to the number of numbers in the list to be sorted, rather than to the length of the largest number. (The Wikipedia article on radix sort gets this right... Just not the one on abacus sort)
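For the bounded case described above, a counting sort really is linear in n, but only because the key width is fixed; a minimal sketch (assuming unsigned 16-bit ints, as in the example):

```python
def counting_sort_16bit(nums):
    """Sort a list of unsigned 16-bit ints in O(n + 2**16) time.
    Linear in n only because the key size (16 bits) is a fixed constant."""
    counts = [0] * (1 << 16)   # one bucket per possible value
    for x in nums:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out

print(counting_sort_16bit([5, 3, 65535, 0, 3]))  # [0, 3, 3, 5, 65535]
```

If the ints were unbounded instead, the bucket array would grow with the largest value, and the "linear" claim would quietly become exponential in input length, which is exactly the point being made about abacus sort.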
Yeah, it's O(N) in air and O(sqrt(N)) in a vacuum. The O(1) thing assumes all the beads move simultaneously in the same time unit, which isn't possible in practice of course.
Have you tried alternative test sites? I test on my forearm and it doesn't hurt; it mostly takes a couple of quick lancings. I use a FreeStyle Lite meter with a Multiclix lancet. No finger damage, no mess, you just have to have your forearms visible. It's easier for your wife to do too, since lancet placement is trivial (anywhere on the forearms). Hope the CGM works for you, of course.