Anthropic marketing (and even supposedly technical write-ups) has sadly become more hyperbole and less substance over time, imo. This technology is impressive enough on its own; it really feels like they're shooting themselves in the foot in the long run, but what do I know
Case in point here: they conveniently fail to report the false positive rate, while also saying that if it weren't for AddressSanitizer discarding all the false positives, this system would have been next to useless
Right now, we accept false positives as long as you can sort them out. It's pretty typical that >99% of fuzzer runs don't result in new coverage. Fuzzers are far from useless without feedback, of course, but it's better to have it if you can. I guess the question is whether the LLM approach has lower costs for validation and triaging than fuzzing alone; that's unclear to me. Anthropic would like people to believe automation is this scary new unknown
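To make the >99% point concrete, here's a toy coverage-guided loop (my own sketch, nothing from the article; the target function and branch ids are invented). Out of thousands of mutated runs, only the handful that hit a new branch ever feed back into the corpus; everything else is discarded.

```python
import random

def buggy_parse(data: bytes) -> set:
    """Toy target: return the set of branch ids this input exercised."""
    branches = {0}
    if len(data) > 0 and data[0] == ord("F"):
        branches.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            branches.add(2)
    return branches

def fuzz(seed_corpus, iterations=2000, rng=None):
    rng = rng or random.Random(0)
    corpus = list(seed_corpus)
    seen_coverage = set()
    new_coverage_runs = 0
    for _ in range(iterations):
        # Mutate a random byte of a random corpus entry.
        child = bytearray(rng.choice(corpus))
        child[rng.randrange(len(child))] = rng.randrange(256)
        cov = buggy_parse(bytes(child))
        # Feedback: keep the input only if it reached a new branch.
        if not cov <= seen_coverage:
            seen_coverage |= cov
            corpus.append(bytes(child))
            new_coverage_runs += 1
    return new_coverage_runs, iterations
```

Since there are only three branches in the toy target, at most three of the 2000 runs can ever contribute new coverage; the rest are "wasted" runs, which is the normal economics of fuzzing.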
But on the other hand the claude app is garbage… https://github.com/anthropics/claude-code/issues/22543
obviously native apps can be garbage too, but I must say electron apps have a surprisingly high incidence of terrible performance issues, unsure if it’s a correlation or causation issue
LangGraph implements a variant of the Pregel/BSP algorithm for orchestrating workflows with cycles (ie. not DAGs) and parallelism without data races. You can design your graph as a state machine if you so desire
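As a sketch of what "design your graph as a state machine with cycles" can mean (this is not LangGraph's actual API, just the control-flow idea; the node names and routing function are invented), a node's output can route execution back to an earlier node until some condition holds:

```python
# Toy cyclic workflow: draft -> review -> (loop back to draft, or end).
def draft(state):
    return {**state, "attempts": state["attempts"] + 1}

def review(state):
    ok = state["attempts"] >= 3  # pretend quality check
    return {**state, "done": ok}

def route(state):
    # Conditional edge: where to go after review.
    return "end" if state["done"] else "draft"

NODES = {"draft": draft, "review": review}
EDGES = {"draft": "review"}  # hard edge; review routes conditionally

def run(state):
    node = "draft"
    while True:
        state = NODES[node](state)
        if node in EDGES:
            node = EDGES[node]
        else:
            nxt = route(state)
            if nxt == "end":
                return state
            node = nxt
```

The cycle (review back to draft) is exactly what a DAG-only orchestrator can't express directly.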
Hi, you don't have to use LangChain tools or ToolNode with LangGraph; you can absolutely write your own custom tool-handling logic. In LangGraph, nodes are just functions that can do whatever you want. There seems to be a misconception somewhere?
That's true, you could skip the ToolNode object and define your own logic to handle tools. You do need to use at least the @tool decorator in order to attach those functions to a model via the bind_tools method.
Defining some of the functions as tools was where some of the "self" parameter issues mentioned in the TDS write-up came up: those parameters aren't added by the LLM, but LangGraph/Pydantic errors if they're missing.
Technically you could forgo the ToolNode object, not define your functions as tools, not use the bind_tools method, and potentially not use LangChain's model abstractions, but at that point you're not using much of the framework at all.
FWIW we have first class support for subgraphs in the library, would love to know what issues you faced there. Support for subgraphs in the studio is coming soon.
Thanks. Would a topological sort or some other simpler algorithm not have sufficed? Processing the graph with cycles directly may not be as optimised as Pregel, but it would have been simpler.
Pregel actually doesn't require the structure of the graph to be declared ahead of time; the next set of tasks (functions) to run is computed at the beginning of each step, so in theory the graph topology can be entirely dynamic. LangGraph supports dynamic topologies too, i.e. don't declare any hard edges and add a conditional edge after every node
The value of using Pregel is supporting cyclical graphs with concurrent tasks but completely deterministic behavior (e.g. it doesn't matter that task A happens to finish before task B this time; its results will be applied in a predetermined order, etc.)
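A minimal illustration of that determinism guarantee (my own sketch, not LangGraph internals; the task names and the list-append reducer are invented): tasks within a superstep may finish in any order, but their state updates are applied in a fixed, predetermined order, so the final state is always the same.

```python
import random

def task_a(state):
    return {"log": ["A"]}

def task_b(state):
    return {"log": ["B"]}

def superstep(state, tasks):
    # Simulate nondeterministic completion order...
    finished = list(tasks.items())
    random.shuffle(finished)
    results = {name: fn(state) for name, fn in finished}
    # ...but apply the updates in a predetermined order (sorted
    # task name here), so the merged state is deterministic.
    for name in sorted(tasks):
        for key, vals in results[name].items():
            state.setdefault(key, []).extend(vals)
    return state
```

However the shuffle lands, the log always comes out `["A", "B"]`; that's the BSP barrier-then-merge pattern in one function.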