Hacker News | simon83's comments

Does anyone know how the AMD consumer GPU support on Linux has been implemented? Must use something else than ROCm I assume? Because ROCm only supports the 7900 XTX on Linux[1], while on Windows[2] support is from RX 6600 and upwards.

[1]: https://rocblas.readthedocs.io/en/rocm-6.0.0/about/compatibi... [2]: https://rocblas.readthedocs.io/en/rocm-6.0.0/about/compatibi...


The newest release, 6.0.2, supports a number of other cards[1] and in general people are able to get a lot more cards to work than are officially supported. My 7900 XT worked on 6.0.0 for instance.

[1] https://rocm.docs.amd.com/projects/install-on-linux/en/lates...


I wouldn't read too much into "support". It's more about business/warranty/promises than about what the hardware can actually do.

I've had a 6900 XT since launch and this is the first I'm hearing "unsupported", having played with ROCm plenty over the years on Fedora Linux.

I think, at most, it's taken a couple of key environment variables.
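For reference, the "couple of key environment variables" usually means something like the following (a hedged sketch; the exact values depend on your card, so check your GPU's actual ISA name with `rocminfo` first):

```shell
# Override most often cited for RDNA2 cards like the 6900 XT (gfx1030):
# tell the ROCm runtime to treat the GPU as a supported ISA target.
export HSA_OVERRIDE_GFX_VERSION=10.3.0   # report the GPU as gfx1030
export ROCR_VISIBLE_DEVICES=0            # limit ROCm to the first GPU
```

With those set, frameworks like PyTorch's ROCm build will generally initialize on officially "unsupported" cards of the same architecture family.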


How hard would it be for AMD to just document the levels of support for different cards the way NVIDIA does with their "compute capability" numbers?!

I'm not sure what is worse from AMD - the ML software support they provide for their cards, or the utterly crap documentation.

How about one page documenting AMD's software stack compared to NVIDIA, one page documenting what ML frameworks support AMD cards, and another documenting "compute capability" type numbers to define the capabilities of different cards.


It almost looks like they're deliberately trying not to win any market share.

It's as if the CEO is mates with Nvidia's CEO and has an unwritten agreement not to try too hard to upset the applecart...

Oh wait... They're cousins!


> I can't reason as well or as quickly. Things go over my head. My working memory is so much smaller. I get lost in code all the time. I forget what I'm working on.

Is that while high on THC, or while sober but having consumed, let's say, the day before?


Both. When I'm high at work it's very very acute. When I'm not high but I was high yesterday I can tell I'm lower-functioning but it's not as bad.

Weed and untreated bipolar disorder are directly responsible for me losing my jobs at SpaceX, Google, and 2 other smaller companies.


Do you still consume cannabis?


Yes


Why, if it's cost you so much? Are you seeking treatment, or is treatment something you even want? This is not an attack; I'm genuinely interested. My sister is bipolar and she saw positives from stopping cannabis use.


Because I don't care. I'm burnt out. I have lost motivation. I used to care but I let my bipolar disorder get the best of me and there were consequences. I fell off the wagon and it stuck. I started smoking a lot in the pandemic and it compounded.

I work a dead end job now. It pays a fraction of google and is nowhere near the prestige of SpaceX. I don't care at this point.


Sorry you reached that point. You are clearly a smart person to have landed those jobs in the first place. I hope that everything works out for you, I really do.


Personally I care more about continuing to use marijuana than continuing to work. I've made enough money that I don't really need to work at least for the next decade and don't have anyone depending on me being responsible so the motivation to quit isn't really there.

I would guess marijuana only makes me about 10% worse at my job so it's not like I'm being fired like the other guy.


Not the GP, but for me it can be both. Depends a lot on your overall usage pattern.

Right now I only really smoke once a month at most, when the opportunity presents itself. In these cases, I'm obviously completely non-functional under the influence. But because I also smoke a lot less (I have no tolerance), I feel pretty much completely fine the next day, possibly even better than usual: the resulting night of REM-suppressed sleep gives me a jolt of antidepressant effect without the downsides of sleep deprivation.

If I'm in a binging mode, it means I have a huge tolerance, smoke a ton more, and honestly it takes at least 2-3 weeks of abstinence before I'm back to baseline for working memory, possibly longer.

But paradoxically, if I'm in such a binge, I'm probably more functional if I've had my morning smoke compared to skipping it for the day, because abstaining would then cause me to be highly irritable and unable to focus on anything in addition to the state of temporary dementia I'd be in. And since there's a massive tolerance, one dose won't have acute deleterious effects to the same degree.


I also don't think a customer-facing chatbot brings much value, but an internal, employee-only chatbot could be really useful, depending on the organization of course. The company in my last position was a rather big one with an insanely huge Confluence instance. I've spent (wasted) so much time searching for information there. Having a chatbot, trained on all that information, would've been really useful, I think.


I think the recent announcement of OverflowAI [1] could work quite well in this way for a big company like that.

FWIW the announcement reads very boring to me but I guess I was expecting something else. Likely won't be super useful in a small-medium size company.

[1] https://stackoverflow.blog/2023/07/27/announcing-overflowai/


It could be useful in a small company if you tie it in to how your software works with other software.

Even things like support case generation for customers would be good... the customer interacts with the AI generating the ticket, and it captures the simple things like what they're running on, plus a more drilled-down description of the problem.

I get so many "I have problem, help" tickets with no information at all.


If anything, I hope Atlassian is looking into some AI capabilities for precisely this purpose. I also find their search feature lacking when dealing with a huge knowledge base, and perhaps a bot would improve things.


Atlassian will find a way to make ChatConfluence a worse experience than all of their other products.


AI chatbots could be useful in support if they could fix the code for customers and submit a pull request to the developers. No more JRA-9 issues open for 10 years.

And same goes for OSS libraries.


As a Linux noob I decided to try Arch Linux 4 years ago because I wanted to actually learn how the various parts that make up a whole Linux distro work (things like the bootloader, systemd, various desktop environments etc.). Other distros give you everything ready out of the box, which made it hard for me to actually dig deeper under the hood, because I had no clue where to start. With Arch Linux you more or less build a whole Linux system from the bottom up.

Following the Arch guides to get a fully working system was easy enough for a complete Linux noob like me. But you still need some basic knowledge about computing, for things like networking, file systems, drive partitioning and such. If you learned this from other operating systems, then Arch Linux is fairly easy to set up by just following their guide.


Google: "confetti has left the cannon"

> No results found for "confetti has left the cannon".

I'm amazed that a "stochastic parrot" can come up with such a beautiful idiom.


For the curious, here was the conversation I had:

ME: What's another saying similar to: "The cat is out of the bag" or "The genie is out of the bottle"?

chatgpt: Another similar saying is "the toothpaste is out of the tube." Like the other two expressions, this phrase conveys the idea that something has been revealed or unleashed, and it cannot be undone or put back into its original state.

ME: Can you invent a new phrase that has the same meaning, which is completely new?

chatgpt: Certainly! Here's a new phrase with a similar meaning: "The confetti has left the cannon." This phrase suggests that once something is out in the open or has happened, it's impossible to reverse or contain it, much like confetti that has been shot out of a cannon during a celebration.

ME: What's the name for this kind of expression? Aphorism? Fable?

chatgpt: This kind of expression is known as an idiom. Idioms are phrases or expressions with a figurative meaning that is different from their literal meaning. They are commonly used in everyday language to convey a specific message or sentiment in a more colorful or memorable way.


I don’t understand why people aren’t more impressed with it clearly understanding and then even inventing idioms. That shows some real intelligence.


It’s because they’re confused, thinking human intelligence isn’t itself learned stochastic expectation.


This seems backward to me. Wouldn’t you be less impressed by ChatGPT if you thought that human intelligence worked the same way as LLMs?

If humans have some special sauce different from the computer, then it’s crazy that ChatGPT can emulate human writing so well. If humans are also just statistical models, then of course you can throw a big training set at some GPUs and it’ll do the same thing. Why should we be surprised or impressed by idioms?


Well, I don’t think they work exactly like the human mind, because we are likely a more complex ensemble model. But a) it’s impressive because it’s reaching a goal of computing held for generations, and as a computer scientist I’m excited; b) its corpus of learning is the amazing collection of everything that we built with the modern internet, so its breadth and depth are astounding. The act of creating what we have naturally is itself exciting though.


I suppose it depends on the extent to which you're impressed by the concept of LLMs, I can see it both ways.

To explain the other way to your thinking: human intelligence is the same; holy crap they cracked robotic 'human' intelligence, it works exactly the same way.


That's how the brain evolved, after all: take incoming input, process according to previous patterns, output a reaction to peripheral nervous system that will enhance survival. The evolutionary purpose of a cortex is to store a statistical model of an animal's environment based on past stimulus. Human intelligence is just a complicated permutation of these initial systemic parameters.


That’s why people have to take courses in reasoning and logic to be able to successfully apply it. It’s not like reasoning and logic are natural human skills. They’re learned through reinforcement learning by repeatedly exposing our brain to patterns again and again until we can train our stochastic expectation juice to not hallucinate an answer.


And it takes ~20 years to train a new brain so it can coherently answer questions about a wide variety of topics. Even worse, you can't even copy-paste it once you're done!


It arguably needs much less training data though.


What we shouldn't do is anthropomorphise it too much. While LLMs can express themselves and interact with us in natural language, their minds are very different from ours - they never learned by having an embodied self, and they can't continuously learn and adapt the way we do - once the conversation is over, it's like it never existed unless it's captured for a future training cycle.

Right now, their ability to learn is severely limited. And, yet, they outcompete us quite easily in a lot of different tasks.


Agreed. There are a hundred different kinds of information processing that go into a human-like mind, and we've kinda-sorta built one piece. And there are a lot of pieces that it would neither be sane nor useful to build (eg. internalized emotions), so we might not see an AI with all the pieces for a very long time ("never" is probably too much to hope for).


It's amusing that our first contact with a completely alien intelligence is with one of our own making.


How? Organisms with brains process every second of their life, is that not training data on a level comparable with current AI models?


From a pure data amount point of view yes, but relatively little of that would seem to be relevant for our intellectual capacities. If GPT was a robot moving autonomously around in the world with full visual, auditory and tactile apparatus, it may be a bit different.


Hm, not sure how most of that data would be irrelevant, could you clarify? I think all of that data as well as interacting with the environment creates the level of knowledge and intelligence we have today.


I'm definitely impressed when I see things like this. This is much more impressive than writing a 5 paragraph nonsense about a 2 sentence prompt.


Is it?

There are many mentions of confetti cannons on the web, along with explanations of how they work (saying something like confetti shoots out of the cannon). ChatGPT just picked a random thing (confetti) and completed the pattern "X out of Y" with the thing confetti comes out of.

It's easy. The cereal is out of the box. The helium is out of the balloon. The snow is out of the globe.

And it's exactly the one thing LLMs are trained to do: reproduce patterns of words. They are (perhaps) already better than humans at that one specific skill (another win for AI research) but I don't think it's a sign of general intelligence.
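To make the commenter's point concrete (a deliberately silly toy, nothing like how an actual LLM works internally): even a hard-coded container lookup can churn out "X is out of Y" variants once you have the frame and a list of famous pairings.

```python
# Toy pattern completion: map a thing to the container it
# famously comes out of, then fill the "X is out of the Y" frame.
CONTAINERS = {
    "cat": "bag",
    "genie": "bottle",
    "toothpaste": "tube",
    "confetti": "cannon",
    "cereal": "box",
}

def out_of_idiom(thing: str) -> str:
    """Complete the 'X is out of the Y' pattern for a known thing."""
    return f"The {thing} is out of the {CONTAINERS[thing]}."

print(out_of_idiom("confetti"))  # → The confetti is out of the cannon.
```

The hard part, of course, is not filling the frame but knowing which pairings exist and which ones sound evocative, which is where the statistics over a huge corpus come in.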


Like you, I thought these pieces of software and data were little more than statistics-based text generators. But it turns out that this is a Category Mistake.

There was an argument made by Raphaël Millière in a recent Mindscape Podcast [1] with Sean Carroll that finally landed for me. He used the example that human beings are driven to eat and reproduce, so by that argument all humans are just eating and reproducing machines. "Ah! But we developed other capabilities along the way to allow us to be good at that!" And that's the point.

GPT-4, for example, is very very good at producing pleasing and useful output for a given input. It uses a simulated neural net to do that. Why would one assume that on the way toward becoming excellent at that task that a neural net wouldn't also acquire other abilities that we associate with reasoning or cognition? When we test GPT-4 for these things (like Theory of Mind) we actually find them.

"Ah hah!" you say, "Humans are set up to learn from the get go, and machines must be trained from scratch." However if you consider the entirety of our genetic legacy together with our childhoods, those are our equivalent "training" from scratch.

I don't think it can be easily dismissed that we're seeing something significant here. It's not human-level intelligence yet. Part of the reason for that is that human brains are vastly more complex than any LLM at the moment (100s of trillions of "parameters" in LLM-speak, along with other advantages). But we're seeing the emergence of something important.

[1] https://www.youtube.com/watch?v=aUJOcVPdDvg


That emergence is precisely what I'm looking for evidence of.

Human beings evolved to eat and reproduce and yet here we are, building computers and inventing complex mathematical models of language and debating whether they're intelligent.

We're so far from the environment we evolved to solve that we've clearly demonstrated the ability to adapt.

ChatGPT doing well at a language task isn't demonstrating that same ability to adapt because that's the task it was designed and trained to do. ChatGPT doing something completely different would be the impressive example.

In short: I don't categorically reject the possibility that LLMs might become capable of more than being "statistics-based text generators", I simply require evidence.


This paper from late last year shows that LLMs are not "just" stochastic parrots, but they actually build an internal model of the "world" that is not programmed in, just from trying to predict the next token.

https://ar5iv.labs.arxiv.org/html/2210.13382

PS: More research has been done since that confirmed and strengthened the conclusion.


Another poster shared a link to this paper last week for Theory of Mind: https://arxiv.org/abs/2302.02083

We're seeing those other capabilities emerge; like being able to play chess though it's not been trained to do so. That is, these LLMs are displaying emergent abilities associated with reasoning.

These LLMs aren't R. Daneel Olivaw or R2D2 (which is what I think of when I think of the original term for AI, and what we took to calling AGI). We're closer to seeing the just-the-facts AIs we encounter in Blindsight. Intelligence without awareness.

Funny that we still have to use science fiction to make our comparisons because our philosophy of intelligence, mind, and consciousness are insufficient to speak on the matter clearly.


The issue I see here is you are doing a worse job at this than ChatGPT. Creating idioms is hard; that is why we left most of them to Shakespeare.

- I regularly return cereal to its box.

- "helium" and "balloon" have a more awkward rhythm than "confetti" and "cannon". It also loses the connotations of sudden, explosive and exciting change.

- Snow & globe I'm not even sure what that means in practice. It has poor prospects as an idiom. Is the snow even known for leaving globes?


> "helium" and "balloon" have a more awkward rhythm than "confetti" and "cannon". It also loses the connotations of sudden, explosive and exciting change.

Not only that, but "the confetti has left the cannon" is an alliteration, which makes the phrase even more poetic.


I actually agree, it's a very good phrase.

But I do think it's cherry picking the most impressive example. I repeated the dialog (and some variations), each time asking for a completely new idiom, and ChatGPT responded with several phrases that aren't new at all:

"The toothpaste is out of the tube"

"The lid has been lifted"

"The secret has been spilled"

"The arrow has left the bow"


> But I do think it's cherry picking the most impressive example.

Yes, but don't we cherry pick from what humans have said, too? I'm sure there have been many dumb and obvious proverbs that didn't survive.

> ChatGPT responded with several phrases that aren't new at all

Are you using the GPT-4 version of ChatGPT? That's what GP used.


It also attests to the ramifications: the magnificent difficulty of cleaning up confetti. You will keep occasionally finding confetti around the house for months or years.


To continue with some human brain powered pedantry for fun I’d argue that “the snow is out of the globe” is not a valid idiom to suit the “X out of Y” pattern.

We have multiple example pairings of X and Y, but the common components are that putting X back in Y would be impossible or extremely difficult, and also that X is in some way meant to no longer be contained inside Y as part of either a desired outcome, its normal function, or the natural, expected and thus inevitable result. Cats want to escape, helium leaks, confetti is expelled to have the desired effect, and toothpaste is squeezed out to use it…

For the snow to come out of a snow globe you have to smash it which is not normal usage, not normally meant to happen, and shouldn’t happen by itself. Making the idiom “The snow is out of the globe” not a proper member of this “family” of idioms. (Also I’m not sure if there’s an agreed upon collective noun for idioms)


"The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence."

https://en.wikipedia.org/wiki/AI_effect


Better that than the opposite effect, to assume that because a system solves a single problem very well, it is intelligent.

Is Stockfish intelligent?

Is a system with A* pathfinding intelligent?

I would define intelligence as the ability to solve a wide variety of novel problems. A system built to be excellent at a single task may be better than humans at that task but still lack "intelligence".

We still don't know what that even is exactly, but historically people consistently underestimate how difficult it is.

Not even knowing how to approach it, researchers work on solving single specialized problems instead and make little progress on whatever "intelligence" is.

(If you'd prefer a different definition of intelligence under which Stockfish and GPT are intelligent, then what would you call the ability to solve a wide variety of novel problems? Feel free to substitute that word for "intelligence" above if you'd like to understand what I'm saying.)


> Is Stockfish intelligent?

> Is a system with A* pathfinding intelligent?

I'm not sure if we should get stuck on definitions of intelligence.

The fact is that these tools are useful, as are the currently existing AIs. The latter can also pass for humans in many ways, while the algorithms you mentioned can only pass for humans in very narrow domains. Both can exceed human performance in some ways.

Eventually, AIs may be indistinguishable from humans, or may convince humans that they should be treated differently from "mere" programs and algorithms, and at that point we will have entered a new era, call it what you will.


We'll have to "get stuck on" definitions of intelligence if we want to talk precisely about what LLMs are capable of.


Not necessarily, as we can just evaluate them on their performance as we give them ever greater challenges.

To do this we do not need to consider whether they're intelligent at all.


In a way, this comment perfectly encapsulates why the argument "machines will never replicate human behavior" is so ridiculous. Instead of engaging with the discussion and topic, you chose a position, and then tried to justify it without really thinking about why one example works and the other one doesn't. In doing so you're literally showing that for certain topics, machines are already more capable than some humans.


I didn't say "machines will never replicate human behavior" so I don't think you're engaging with what I said.


Sure, you didn't say that. You merely said:

> And it's exactly the one thing LLMs are trained to do: reproduce patterns of words. They are (perhaps) already better than humans at that one specific skill (another win for AI research) but I don't think it's a sign of general intelligence.

while completely missing why the machine did a better job.


Is it so impressive that the “fart left the butt” ? :)


That’s brilliant. Though personally I prefer "The SARS-COV-2 virus has left the lab"


Not exactly sure why it would be surprising that it can come up with a convincing idiom when it can produce remarkably good _poetry_


I think the thrilling part is that it's a somewhat atomic concept that can somewhat convincingly be proven to not exist in the training data. While poetry is more impressive if it's as original, it's harder to show that it's not just stitched together from the training data.


I just asked GPT-4 to come up with more such "provably original idioms":

""" Here are a few more examples of idioms with meaningful and provable atomic originality:

"The kite has touched the stars" - This phrase could mean that someone has achieved a seemingly impossible goal or reached a level of success that was thought to be unattainable.

"The paint has mingled on the canvas" - This idiom might convey the idea that once certain decisions are made or actions taken, the resulting outcome can't be easily separated or undone, similar to colors of paint that have blended together on a canvas.

"The clock has chimed in reverse" - This expression could be used to describe a situation where something unexpected and unusual has occurred, akin to the unlikely event of a clock chiming in reverse order.

"The flower has danced in the wind" - This phrase could signify that someone or something has gracefully and nimbly adapted to changing circumstances, just as a flower might sway and move in response to the wind. """


Yeah but can it beatbox?


That is impressive, especially given that neither Google nor DuckDuckGo find any previous trace of that idiom.


did you search in different languages?


Conspiracy theory: this idiom used to be widespread on the internet, but a shadowy cabal of AI maximalists erased millions of such expressions from search engines so GPT could appear to re-invent them.

It is a good expression though -- evocative but not gross or violent. You could imagine many less successful analogies to something ejecting something else.


That is not a conspiracy theory... it is alternate reality fan fiction.

While making up "what ifs" can be fun, it doesn't merit either of the words "conspiracy" or "theory".


Not exactly the same, but from:

https://www.instagram.com/p/CQdBiVyh5C2/?hl=en

Now that the cat is out of the bag, or, should I say the confetti is out of the… can?


Is it strange that I think it's more likely that LLMs generalize and synthesize new things out of old patterns instead of just copying a singular example.


It’s plausible that the above quote made it a closer association for GPT to come up with.


Could be lifted from another language. Is nice though.


Now Google has one search result pointing to your comment :)


Out of distribution generations are a thing.


I understand that all of this is based on some fundamental mathematics, a couple of algorithms here, and some statistical analysis there. And I find it absolutely amazing that we can use all of that knowledge and encode it into something that resembles intelligence. This makes me think if our brains and the concept of intelligence are really as special and mysterious as we assume.


Try looking up "pinkie pie party cannon"


That name aged like milk. First of all, what you said. And second, a parrot can make more parrots without human help; language models can't make GPU chips. Insulting for both LLMs and parrots.


Language models exist not in the physical world but the digital one. A more apt comparison would be whether LLMs can compile new (or slightly modified) LLMs, and I believe the answer will soon be (if not already) a resounding yes.


I agree, LLMs can generate the source code for their network, can generate the training data, supervise a model being trained and explain step by step what they do. That's why I specifically said they can't make GPUs, if they could, they would be self replicators. We'd have to consider them alive.


> language models can't make GPU chips

Not yet. I'm sure somebody is taking a hard look already.


I would assume some of them can write Verilog code


Well, I clearly had to try it even though I don't know my verilog from my veri-println. Can someone who does know, rank this?

--

Prompt: verilog code for 4 bit binary adder

[I'm skipping the English description of the code, this is just the code part of free ChatGPT's response, no cherry picking]

  module binary_adder(A, B, Cin, S, Cout);
    input [3:0] A, B;
    input Cin;
    output [3:0] S;
    output Cout;
    
    wire C0, C1, C2;
    
    // Full adder for bit 0
    full_adder FA0(A[0], B[0], Cin, S[0], C0);
    
    // Full adder for bit 1
    full_adder FA1(A[1], B[1], C0, S[1], C1);
    
    // Full adder for bit 2
    full_adder FA2(A[2], B[2], C1, S[2], C2);
    
    // Full adder for bit 3
    full_adder FA3(A[3], B[3], C2, S[3], Cout);
    
  endmodule
  
  module full_adder(A, B, Cin, S, Cout);
    input A, B, Cin;
    output S, Cout;
    
    assign S = A ^ B ^ Cin;
    assign Cout = (A & B) | (A & Cin) | (B & Cin);
    
  endmodule


It seems correct, but verbose. GPT-4 gives the more concise and idiomatic:

  module full_adder(input [3:0] a, input [3:0] b, input cin, output reg [3:0] sum, output reg cout);

  always @(*) begin
      {cout, sum} = a + b + cin;
  end

  endmodule


We found that Codex was pretty bad at Verilog and so we fine-tuned our own models on Verilog code, starting from the SalesForce CodeGen model. The paper will be at DATE: https://arxiv.org/abs/2212.11140

The largest model we trained is here: https://huggingface.co/shailja/fine-tuned-codegen-16B-Verilo...


> maintain security? built-in firewall and DDoS protection?

I have a Hetzner dedicated server (not the Cloud offering) and I set up OPNsense as an all-in-one routing and firewall solution in a separate VM. All incoming and outgoing traffic goes through this OPNsense VM, which acts as the default gateway for the host system and all other VMs/Docker containers. You either need to book a 2nd public IPv4 address (or just use IPv6 for free if that is good enough for your use case, since each server comes with an IPv6 /64 subnet), or, if you want to stick with 1 IPv4 address, you can do some MAC spoofing on the main eth interface of the host OS and give the actual MAC address and public IP to OPNsense's WAN interface. This is necessary because Hetzner has MAC address filtering in place, meaning only the MAC address associated with the public IP is allowed to send traffic.
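A rough sketch of the single-IPv4 variant described above (interface names and addresses are placeholders, not Hetzner defaults; adapt to your own network tooling):

```
# Host (/etc/network/interfaces style): the physical NIC joins a
# bridge, and the host itself keeps only a private address behind
# the OPNsense VM.
auto br0
iface br0 inet static
    bridge_ports enp0s31f6        # physical NIC, placeholder name
    address 192.168.100.2/24      # host's private address
    gateway 192.168.100.1         # OPNsense LAN interface

# OPNsense VM: the WAN vNIC is cloned with the physical NIC's MAC
# address (Hetzner routes the public IP only to that MAC); the
# public IP and Hetzner's gateway are configured on that WAN
# interface inside OPNsense.
```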


You, like many others, are basing your opinion on the current state of LLMs/GPT. And I totally agree with you: the current GPT-4 version might not replace programmers. But how about future iterations? Personally I can't fathom what the next few years will bring us in this area, especially considering the jump from GPT-3.5 to GPT-4 in such a short time frame. I'm almost convinced that it will make a lot of jobs obsolete in the future, including some programming jobs, but I'm not bold enough to make a prediction of when this will happen, be it in 2, 4, 10 or 20 years.


Oh wow, and I thought I was going crazy, because I also make so many typos when typing on my iPhone 11 (normal size)! On any Android phone I don't make nearly as many typos, and it just feels so much better. I never understood why; maybe the "hitbox" of the iOS keyboard keys is just smaller? Because I could swear I'm tapping the right key, but somehow it often triggers a neighboring key.

Also agree with the horrible cursor movement and text selection, it is wack af on iOS


I was a huge React fan myself, but when I started to use Vue and its Composition API a year ago I had tears of joy in my eyes. You have a setup function that is executed exactly once. Inside this setup function you set up your lifecycle callbacks, reactive state variables and so on. This is how I wish React + Hooks had been. Nowadays I won't even accept projects that use React anymore, exactly because of all the pain points OP mentioned.
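A minimal sketch of the run-once idea (hand-rolled for illustration, not Vue's actual API or internals): state lives in a closure that is created once, instead of being re-derived on every render the way hook state is.

```javascript
// Hand-rolled illustration of a run-once setup function.
// Not Vue: just a closure invoked a single time per component
// instance, with render() reading the state it captured.
function createComponent(setup) {
  return setup(); // executed exactly once per component instance
}

const counter = createComponent(() => {
  let count = 0; // plain closure state, no useState/call-order rules
  return {
    increment: () => { count += 1; },
    render: () => `count is ${count}`,
  };
});
```

Calling `counter.increment()` and then `counter.render()` reflects the updated closure state; in real Vue, the `ref()`/`reactive()` wrappers add change tracking on top of this basic idea.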

SolidJS follows the same principle as Vue + Composition API, and I think Svelte is in the same boat. This kind of pattern is so much easier to use and reason about; React Hooks become really painful and annoying really fast.


I've been using React since around 2016, and when Hooks were introduced I played around with them but didn't like them at all, exactly because of the reasons mentioned in this article. Sure, they reduced quite a bit of the boilerplate that came with class-based components and enabled better code re-usability outside of class inheritance or mixins, but the disadvantages were too big for me personally to really consider using Hooks in a "greenfield" setting.

In my current project I'm using Vue with Composition API which gave me the exact same "aha" moment the author of this article had with SolidJS. Vue + Composition API is way more similar to SolidJS in principle than React + Hooks.


Pure function components in React with MobX for state management rather than the hooks system has worked out quite well for me.

I do, however, keep finding myself wondering about jumping to Vue and Vuex instead.

Ask me in five years, I guess.

