Bogotá is 4 degrees north of the equator. Its climate is fairly temperate because it sits on a high-altitude plateau. But you'd better wear sunscreen anyway under the tropical sun.
Notice that it's still very much possible to produce SWF files with languages like Haxe (http://haxe.org/), and there are frameworks that mimic the Flash drawing API, like OpenFL (https://www.openfl.org/). There is (or was) a lot of interesting stuff like that happening.
Indeed, Flash's UI is really its strength: the way it lets you draw and manipulate curves. I don't think I've seen anything like it since, although illustration is not my trade. That said, it is also possible to do cool procedurally generated stuff with the drawing API, or to work with plain bitmap graphics.
I far prefer the pen tool in Macromedia Freehand/MX, to say nothing of the other drawing modes which it offered (and which Adobe later copied).
I might still have an InDesign subscription if Adobe had just rolled all of FreeHand's capabilities into it. Instead, I keep a Windows computer and a stylus for it (despite Windows having crippled stylus functionality in the Windows 10 Fall Creators Update). Which reminds me: stylus usage in Waterfox broke again, and I have to look up how to fix it (again).
No, it cannot. It can sort of compile some animations (with the EaselJS library), but you have to use JavaScript instead of ActionScript, and it is really not the same as it was in Flash. Basically it doesn't work for me; I abandoned Adobe Animate and am still looking for a replacement for the lost Garden of Flash Utopia.
Flash required a browser plugin to work. It was handling video and 3D animation a decade before the <video> and <canvas> elements were added to the HTML5 spec.
I got my start programming and making weird little animations back in the early 2000s. Later I could earn a living doing stuff with ActionScript: little games on the web, profile picture generators, even stuff on the BlackBerry PlayBook, which supported the AIR runtime. I made games with Flash and ActionScript until around 2015. Newgrounds even holds a jam called Flash Forward in which you submit Flash games: https://www.newgrounds.com/collection/flash-forward
I stopped using Flash long before it became Animate. I'm really sad to see it go, and sad that Adobe showed so little love for such an important piece of the web and the Internet.
I made a game with Adventure Game Studio about 20 years ago, and Windows being Windows, it still runs fine on modern Windows, although I lost the source to it.
AGS is still a good engine for making games! AdventureX (https://www.adventurexpo.org/) is an annual convention held in London, UK (the next one is 7-8 November this year), and lots of AGS users gather there. Some are making games, while others are making modules for integrating libraries or supporting new platforms (handheld consoles, for example).
BTW, I hadn't realised they'd modernised their website. Until very recently they still had their 2006 version, which looked quite nostalgic these days.
Raylib's author was very happy to announce that you can compile an entire raylib program with no dependencies beyond the platform itself (e.g. as a plain win32 app): https://x.com/raysan5/status/1980322289527976202
Oh, that sounds really interesting. I was searching for something like that to render something on a fun hacky LED screen with an embedded processor. Nothing I found was satisfying.
But if I understand that correctly, I should be able to just compile this and then software-render stuff? For my tiny 192x128 pixels this should be fast enough on almost any system. Time for fun animations!
The mantra for the library is "raylib is a simple and easy-to-use library to enjoy videogames programming." It's for hobbyists, learners, tinkerers, or just those who want to enjoy minimalistic graphics programming without having to deal with interfacing with modern machines themselves.
The default Windows installer bundles the compiler and a text editor to make poking at C to get graphics on the screen (accelerated or not) a one-step process. Raylib is also extremely cross-platform, has bindings in about every language, and has extra (also header-only, zero-dependency) optional libraries for adjacent things like audio or text rendering.
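For a sense of how small the "see something on the screen" step is, raylib's canonical first program looks roughly like this (adapted from its basic-window example; window size and text are arbitrary):

```c
#include "raylib.h"

int main(void)
{
    // Open a window and cap the frame rate.
    InitWindow(800, 450, "raylib hello");
    SetTargetFPS(60);

    // Run until the user closes the window or presses ESC.
    while (!WindowShouldClose()) {
        BeginDrawing();
        ClearBackground(RAYWHITE);
        DrawText("Hello from raylib!", 190, 200, 20, LIGHTGRAY);
        EndDrawing();
    }

    CloseWindow();
    return 0;
}
```

That's the whole program: no context creation, no event-loop plumbing, no platform-specific setup.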
When I first started learning C/C++ in the 2000s, I spent more time fighting the IDE/Windows/GCC or getting SDL/SFML to compile than actually playing with code. Then it all fell apart when I tried to get it working on both Linux and Windows, so I said fuck it and ignored that kind of programming for many years. Raylib goes the opposite direction: start poking at C code (or whatever binding), and worry about the environment later, when you're ready for something else.
I never ever bothered to compile SDL/SFML from source; what is so hard about dumping the binaries into a folder and setting the include paths for the compiler and linker?

Although I can imagine newbies may face some challenges dealing with compiler flags.
Not much to a developer. To a novice (potentially very young) user there's confusion why there are 3 versions for e.g. Windows, which to pick from and why, how to set the compiler/linker flags for the build tuple, and then confusion about how to make it work on the alternative targets for their friends (e.g. the web target or the Linux ARM Pi target for class) and why that has to be different steps. None of that is particularly complicated once you go through it all, but it is a bit of a barrier to the "see something on the screen" magic happening to drive interest. Instead, raylib is just a header file include, like a text based "hello world", regardless of platform - even if you don't want to use the pre-made bundle.
Or, more simply, it makes the process "easy as a scripting language" rather than "pretty easy".
> what is so hard dumping the binaries into a folder, set the include paths for the compiler and linker?
The problems already start with getting the precompiled libraries from a trusted source. As far as I'm aware the SDL project doesn't host binaries for instance.
This site is for hackers, which basically means people who like to do things like this. If you can't understand why someone would be interested in this, probably you should remain silent and try to understand hackers rather than commenting.
As someone who was once a child trying to figure out how to compile and link things to use SDL, I think there's some educational value in letting people make games without having to dive deep into how to use C++ toolchains.
I think there's still value in learning the C++ language, and making a game or a demo is quite rewarding, although raylib does have bindings for basically every conceivable language.
I'd make the opposite argument about educational value.
If you learn to compile libraries and programs you have, so to speak, passed an exam: you are ready to "make games" with confidence because you know what you are doing well enough to have no fear of tool complications.
What should be minimized is the accidental complexity of compiling libraries and programs: for example, convoluted build systems and C++ modules.
If you learn to compile libraries and programs, you just learn how to compile libraries and programs. That doesn't teach you anything about how to "make games." It doesn't even make game development significantly easier.
I think the real answer to educating people about making games without the complications of low level programming would be using a framework like Godot or languages like Python or Lua.
Of course technical concerns aren't directly relevant to making games, but they are still necessary. Productive development means overcoming technical hurdles, not only domain specific challenges.
What if you cannot adopt some library that would do something very useful because you lack the skill to integrate or replace CMake or Bazel or Autoconf? Unnecessary technical constraints impact game quality.
What if due to insufficient automation the time between tests after making a very small change is 10 minutes rather than 10 seconds? Reduced productivity impacts game quality.
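To make that concrete, even a tiny Makefile buys you the 10-second loop: after a one-line change, only the affected object file is recompiled before relinking. (The `src/`/`build/` layout here is hypothetical, not from any particular project.)

```makefile
CC      := cc
CFLAGS  := -O2 -Wall
SRCS    := $(wildcard src/*.c)
OBJS    := $(SRCS:src/%.c=build/%.o)

game: $(OBJS)
	$(CC) $^ -o $@

# Each .o depends only on its own .c file, so a one-line change
# rebuilds one translation unit instead of the whole tree.
build/%.o: src/%.c
	@mkdir -p build
	$(CC) $(CFLAGS) -c $< -o $@
```

The point isn't this particular tool; it's that some automation of this shape is what keeps the edit-test cycle short.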
Very useful skills to have. But they don’t need to be learned during the very first lesson on the very first day unless you are trying to filter people out for some reason.
As someone working on a game engine with a multithreaded SSE/NEON implementation of ~GL 1.3 under the hood, this is rad for many reasons other than portability or compatibility. You get full access to every pixel and vertex on the screen at any point in the rendering pipeline. This allows for any number of cool (also likely multithreaded) postprocessing effects that you don't have to shoehorn through a brittle, possibly single-platform shading language and API.
It's slower, indeed, but it's easier to write and debug, more portable, and gives you total control over the render pipeline. It's good for experimentation and learning, and would still be trivial to compile and run 20 or 50 years from now. And with how obscenely fast modern CPUs are, it might even be fast enough for a simple 3D game.
Someone always points out how Doom wasn't "real 3D" like it's some sort of gotcha. Games are smoke and mirrors, it's all a 2D grid of pixels at the end.
Well yeah, a 2D grid of pixels also describes the result of rendering a 2D game. It matters how you arrive at that 2D grid of pixels; that's why you can't render Crysis on just a CPU, at least not in real time.
For the last few months I've been making a songbook with ChordPro (https://www.chordpro.org/), an amazing CLI program that produces a PDF from text files.
I've been working on my own arrangements, putting chords into the lyrics, and the program produces a page with the chord diagrams next to each song. ChordPro descends from a long lineage of programs that do this, but it has been under active development for the last 3-4 years. The developer is quite nice and attends to bug reports.
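For anyone who hasn't seen the format: the input is plain text with directives in braces and chords in square brackets right where they change. A minimal made-up example:

```
{title: Example Song}
{artist: Someone}
{key: G}

[G]This is the first [C]line of the verse,
[G]chords sit inline [D]right where they change.

{start_of_chorus}
[C]Chorus lines go [G]here,
[D]marked off with directives.
{end_of_chorus}
```

Then something like `chordpro --output=songbook.pdf *.cho` renders the PDF (check `chordpro --help` for the exact options in your version), with the chord grids laid out next to each song.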