FLOSS Project Planets

New RYF Web site: It's now easier to support companies selling devices that Respect Your Freedom

FSF Blogs - Thu, 2019-11-07 14:14

The Respects Your Freedom (RYF) certification program helps to connect users with retailers who respect their rights. Retailers in the program sell devices that come with freedom inside, and promise to always ensure that their users are not directed to proprietary software at any point in the sale or ownership of the device. When we launched the program in 2010, we had no idea how quickly the program would grow.

In 2012, when we announced the first certification, we hosted information about the program and retailers as a simple page on the Free Software Foundation (FSF) Web site. With only one retailer selling one device, this was certainly satisfactory. As the program grew, we added each new device chronologically to that page, highlighting the newest certifications. We are now in a place where eight different retailers have gained nearly fifty certifications, including the recently announced Talos II and Talos II Lite mainboards from Raptor Computing Systems, LLC. With so many devices available, across so many different device categories, it was getting more difficult for users to find what they were looking for in just a plain chronological list.

Thus we are proud to announce we're launching a new, stand-alone Web presence for RYF, capable of facilitating its continued expansion. Users can check out the new site at https://ryf.fsf.org. There, they can browse certifications by vendor and device type, and learn about the most recent certifications. Each device has its own page listing the certification announcement, the date of certification, and a link to the retailer's site where users can purchase it.

We hope that this update will make it even easier for users to find products they can trust from retailers dedicated to promoting freedom and privacy for everyone. With that said, there is always room for improvement, so we would love to hear your feedback about the new site. Here's what you can do to help:

  • Check out the new site at https://ryf.fsf.org and let us know what you think by sending an email to licensing@fsf.org.

  • Help spread awareness of the RYF program by sharing our RYF flyer with your friends and colleagues.

  • Buy certified products and encourage others to do so.

  • Encourage a retailer to certify a product by directing them to the RYF criteria page.

  • Support this work by donating or joining the FSF as an associate member.

Categories: FLOSS Project Planets

FSF News: Talos II Mainboard and Talos II Lite Mainboard now FSF-certified to Respect Your Freedom

GNU Planet! - Thu, 2019-11-07 14:13

BOSTON, Massachusetts, USA -- Thursday, November 7th, 2019 -- The Free Software Foundation (FSF) today awarded Respects Your Freedom (RYF) certification to the Talos II and Talos II Lite mainboards from Raptor Computing Systems, LLC. The RYF certification mark means that these products meet the FSF's standards in regard to users' freedom, control over the product, and privacy.

While these are the first devices from Raptor Computing Systems to receive RYF certification, the FSF has supported their work since 2015, starting with the original Talos crowdfunding effort. Raptor Computing Systems has worked very hard to protect the rights of users.

"From our very first products through our latest offerings, we have always placed a strong emphasis on returning control of computing to the owner of computing devices -- not retaining it for the vendor or the vendor's partners. We hope that with the addition of our modern, powerful, owner-controlled systems to the RYF family, we will help spur on industry adoption of a similar stance from the many silicon vendors required to support modern computing," said Timothy Pearson, Chief Technology Officer, Raptor Computing Systems, LLC.

These two mainboards are the first PowerPC devices to receive certification. Several GNU/Linux distributions endorsed by the FSF are currently working towards offering support for the PowerPC platform.

"These certifications represent a new era for the RYF program. Raptor's new boards were designed to respect our rights, and will open up new possibilities for free software users everywhere," said the FSF's executive director, John Sullivan.

The Talos II and Talos II Lite also represent an interesting first in terms of reproducible builds. When two people compile the same code, the resulting object code usually differs slightly because of variables like build timestamps. When users can independently reproduce exactly the same builds of important free software programs, anyone can distribute those builds with greater certainty that they do not contain hidden malware. For the Talos II, the FSF was able to reproduce the build that is loaded onto the FPGA chip of the board that was tested, and will include the checksum of that build along with the source code we publish.

"We want to congratulate Raptor Engineering on this, and we encourage vendors to ship more reproducible builds, which we will be happy to reproduce as part of the RYF certification," said the FSF's senior system administrator, Ian Kelling.

To learn more about the Respects Your Freedom certification program, including details on the certification of these Raptor Computing Systems devices, please visit https://ryf.fsf.org.

Retailers interested in applying for certification can consult https://ryf.fsf.org/about/criteria.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About Raptor Computing Systems, LLC

Raptor Computing Systems, LLC is focused on developing and marketing user-controlled devices.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Raptor Computing Systems, LLC sales@raptorcs.com

Image of Talos II by Raptor Computing Systems, LLC Copyright 2018 licensed under CC-BY-SA 4.0.

Categories: FLOSS Project Planets

FSF News: LibrePlanet returns in 2020 to Free the Future! March 14-15, Boston area

GNU Planet! - Thu, 2019-11-07 12:10

BOSTON, Massachusetts, USA -- Thursday, November 7, 2019 -- The Free Software Foundation (FSF) today announced that registration is open for the twelfth LibrePlanet conference on free software. The annual technology and social justice conference will be held in the Boston area on March 14 and 15, 2020, with the theme "Free the Future." Session proposals will be accepted through November 20.

The FSF invites activists, hackers, law professionals, artists, students, developers, young people, policymakers, tinkerers, newcomers to free software, and anyone looking for technology that respects their freedom to register to attend, and to submit a proposal for a session for LibrePlanet: "Free the Future."

Submissions to the call for sessions are being accepted through Wednesday, November 20, 2019, at 12:00 EST (17:00 UTC).

LibrePlanet provides an opportunity for community activists, domain experts, and people seeking solutions for themselves to come together in order to discuss current issues in technology and ethics.

"LibrePlanet attendees and speakers will be discussing the hot button issues we've all been reading about every day, and their connection to the free software movement. How do you fight Facebook? How do we make software-driven cars safe? How do we stop algorithms from making terrible, unreviewable decisions? How do we enjoy the convenience of mobile phones and digital home assistants without being constantly under surveillance? What is the future of digital currency? Can we have an Internet that facilitates respectful dialogue?" said FSF's executive director, John Sullivan.

The free software community has continuously demanded that users and developers be permitted to understand, study, and alter the software they use, offering hope and solutions for a free technological future. LibrePlanet speakers will display their unique combination of digital knowledge and educational skill over the two-day conference, and will give more insight into their ethical dedication to envisioning a future rich with free "as in freedom" software and without network services that mistreat their users. The FSF's LibrePlanet 2020 edition is therefore aptly named "Free the Future."

"For each new technological convenience we gain, it seems that we lose even more in the process. To exchange intangible but vital rights to freedom and privacy for the latest new gadget can make the future of software seem bleak," said ZoĂŤ Kooyman, program manager for the FSF. "But there is resistance, and it is within our capabilities to reject this outcome."

Thousands of people have attended LibrePlanet over the years, both in person and remotely. The conference welcomes visitors from up to 15 countries each year, with many more joining online. Hundreds of impressive free software speaker sessions, including keynote talks by Edward Snowden and Cory Doctorow, can be viewed on the conference's MediaGoblin instance, in anticipation of further program announcements.

For those who cannot attend LibrePlanet in person, there are plenty of other ways to participate remotely. The FSF is encouraging free software advocates worldwide to use the tools provided on libreplanet.org to host satellite viewing parties and other events. It has also opened applications for scholarships for people around the globe to attend the conference in Boston, and encourages supporters who are able to help others attend by donating to the LibrePlanet travel fund.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://www.fsf.org and https://www.gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

MEDIA CONTACT

ZoĂŤ Kooyman
Program Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Categories: FLOSS Project Planets

Texas Creative: Composer & Drupal for Beginners

Planet Drupal - Thu, 2019-11-07 11:00

Learn the ins and outs of installing and managing Drupal 8 using Composer, the PHP dependency manager. Composer reads a configuration file for a project, then determines all of the underlying software that the project needs in order to work, along with which versions of those packages are compatible with all parts of the project.

Read More
Categories: FLOSS Project Planets

Christopher Allan Webber: Terminal Phase: building a space shooter that runs in your terminal

GNU Planet! - Thu, 2019-11-07 10:15

Terminal Phase running in cool-retro-term.

Yeah you read (and saw, via the above gif) that right! A space shooter! That runs in your terminal! Pew pew pew!

Well it's most of one, anyway. It's a prototype that I built as a test program for Spritely Goblins.

I've satisfied the technical needs I had in building the program; I might still finish it as a game, and it's close enough that making a satisfying game rather than just a short demo is super feasible, but I've decided to see whether or not there's actually enough interest in that at all by leaving that as a milestone on my Patreon. (We're actually getting quite close to meeting it... would be cool if it happened!)

But what am I, a person who is mostly known for work on a federated social web protocol, doing making a game demo, especially for a singleplayer game? Was it just for fun? It turns out it has more to do with my long-term plans for the federated social web than it may appear.

And while it would be cool to get something out there that I would be proud of for its entertainment value, in the meanwhile the most interesting aspects of this demo to me are actually the technical ones. I thought I'd walk through what those are in this post, because in a sense it's a preview of some of the stuff ahead in Spritely. (Now that I've written most of this post, I have to add the forewarning that this blogpost wanders a lot, but I hope all the paths it goes down are sufficiently interesting.)

Racket and game development

Before we get to the Spritely stuff, I want to talk about why I think Racket is an underappreciated environment for making games. Well, in a certain sense Racket has been advertised for its association with game-making, but in some more obscure ways. Let's review those:

  • Racket has historically been built as an educational teaching-programming environment, for two audiences: middle schoolers and college freshmen. Both of them to some degree involve using the big-bang "toy" game engine. It is, by default, functional in its design, though you can mix it with imperative constructs. Maybe to some that makes it sound like it would be more complicated, but it's very easy to pick up. (DrRacket, Racket's bundled editor, also makes this a lot easier.) If middle schoolers can learn it, so can you.
  • Along those lines, there's a whole book called Realm of Racket that's all about learning to program by making games. It's pretty good!
  • Game studio Naughty Dog has used Racket to build custom in-house tools for their games.
  • Pioneering game developer John Carmack built prototypes for the Oculus Rift using Racket. (It didn't become production code, though John's assessments of Racket's strengths appear to align with my own largely; I was really pleased to see him repost on twitter my blogpost about Racket being an acceptable Python. He did give a caveat, however.)

So, maybe you've already heard about Racket used in a game development context, but despite that I think most people don't really know how to start using Racket as a game development environment. It doesn't help that the toy game engine, big-bang, is slowed down by a lot of dynamic safety checks that are there to help newcomers learn about common errors.

It turns out there are a bunch of really delightful game development tools for Racket, but they aren't very well advertised and don't really have a nice comprehensive tutorial that explains how to get started with them. (I've written up one hastily for a friend; maybe I should polish it up and release it as its own document.) Most of these are written by Racket developer Jay McCarthy, whom I consider to be one of those "mad genius hacker" types. They're fairly lego-like, so here's a portrait of what I consider to be the "Jay McCarthy game development stack":

  • Lux, a functional game engine loop. Really this is basically "big-bang for grown-ups"; the idea is very similar but the implementation is much faster. (It has some weird naming conventions in it though; turns out there's a funny reason for that...)
  • By default Lux ships with a minimal rendering engine based on the Racket drawing toolkit. Combined with (either of) Racket's functional picture combinator libraries (pict or the 2htdp/image library), this can be used to make game prototypes very quickly, but they're still likely to be quite slow. Fortunately, Lux has a clever design by which you can swap out different rendering engines (and input mechanisms) which can compose with it.
  • One of these engines is mode-lambda which has the hilarious tagline of "the best 2d graphics of the 90s, today!" If you're making 2d games, this library is very fast. What's interesting about this library is that it's more or less designed off of the ideas from the Super Nintendo graphics engine (including support for Mode-7 style graphics, the graphic hack that powered games like the Super Nintendo version of Mario Kart). Jay talks about this in his "Get Bonus!" talk from a few years ago. (As a side note, for a while I was confused about Get Bonus's relationship to the rest of the Jay McCarthy stack; at 2018's RacketCon I queried Jay about this and he explained to me that that's more or less the experimental testing grounds for the rest of his libraries, and when the ideas congeal they get factored out. That makes a lot of sense!)
  • Another rendering engine (and useful library in general) is Jay's raart library. This is a functional picture library like pict, but instead of building up raster/vector graphic images, you're building up ascii art. (It turns out that ascii art is a topic of interest for me so I really like raart. Unsurprisingly, this is what's powering Terminal Phase's graphics.)

As I said before, the main problem with these is knowing how to get started. While the reference documentation for each library is quite good, none of them really has a tutorial that shows off the core ideas. Fortunately each project does ship with examples in its git repository; I recommend looking at each one. What I did was simply split my editor and type in each example line by line so I could think about what it was doing. But yeah, we really could use a real tutorial for this stuff.

Okay, so 2d graphical and terminal programs are both covered. What about 3d? Well, there are really only two options so far:

  • There's a very promising library called Pict3d that does functional 3d combinators. The library is so very cool and I highly, highly recommend you watch the Pict3d RacketCon talk which is hands down one of the coolest videos I've ever seen. If you use DrRacket, you can compose together shapes and not only will they display at the REPL, you can rotate around and view them from different angles interactively. Unfortunately it hasn't seen much love the last few years and it also mostly only provides tools out of the box for very basic geometric primitives. Most people developing 3d games would want to import their meshes and also their skeletal animations and etc etc and Pict3d doesn't really provide any of that yet. I don't think it even has support for textures. It could maybe have those things, but probably needs development help. At the moment it's really optimized for building something with very abstract geometry; a 3d space shooter that resembles the original Star Fox game could be a good fit. (Interestingly my understanding is that Pict3d doesn't really store things as meshes; I think it may store shapes in their abstract math'y form and raytrace on the GPU? I could be wrong about this.)
  • There's also an OpenGL library for Racket but it's very, very low level. (There's also this other OpenGL library but I haven't really looked at it.) However raw OpenGL is so extremely low level that you'll need to build a lot of abstractions on top of it to make it usable for anything.

So that covers the game loop, input, display.

That leaves us with audio and game behavior. I'll admit that I haven't researched audio options sufficiently; rsound seems like a good fit though. (I really need a proper Guix package of this for it to work on my machine for FFI-finding-the-library reasons...)

As for game behavior, that kind of depends on what your game is. When I wrote Racktris I really only had a couple of pieces of state to manage (the grid, where the falling piece was, etc) and it was fairly easy to do it using very basic functional programming constructs (really you're just creating a new version of these objects every time that a tick or input happens). As soon as I tried moving to a game with many independently operating objects doing their own thing, that approach fell apart.

The next thing I tried using was Racket's classes and object system. That was... okay, but it also felt like I was losing out on a lot. Previously I had the pleasant experience of "Woo, yeah! It's all functional!" A functional game engine is nice because returning to a prior snapshot in time is always possible (you can trivially make a game engine where you can rewind time, for instance; great for debugging) and it's much easier to poke and prod at a structure because you aren't changing it; you just get a new one back. Now I lost that feature.
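To make the functional approach concrete, here is a minimal sketch (generic C++, not Racket, and not code from this post; every name in it is made up) of the style described above: each tick returns a brand new state instead of mutating the old one, so keeping snapshots around for rewinding is trivial.

#include <vector>

// Hypothetical immutable game state: a falling piece on a grid.
struct GameState {
    int pieceX;
    int pieceY;
};

// One tick of the game: build and return a *new* state rather than
// mutating the old one.
GameState tick(const GameState &s) {
    return GameState{s.pieceX, s.pieceY + 1};  // the piece falls one row
}

int main() {
    std::vector<GameState> history;  // every frame ever produced
    GameState current{3, 0};
    for (int frame = 0; frame < 60; ++frame) {
        history.push_back(current);  // snapshots are free: nothing aliases them
        current = tick(current);
    }
    // "Rewind time" by simply picking an earlier snapshot.
    GameState tenFramesAgo = history[history.size() - 10];
    (void)tenFramesAgo;
    return 0;
}

Nothing here is specific to games; the point is just that when updates never mutate, rewinding and inspecting old states costs nothing extra.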

In that conversation I had with Jay at RacketCon 2018, he suggested that I look at his DOS library for game behavior. I won't as quickly recommend this one; it's good for some game designs, but the library (particularly the DOS/Win part) is fairly opinionated against objects ("processes") communicating with each other on the same tick; the idea is that each process can only read what the others wrote on the previous tick. The goal here, as I understand it, is to eliminate an entire class of bugs and race conditions, but I quickly found that trying to work around the restrictions led me to create terrible hacks that were themselves very buggy.
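For illustration only (this is not the actual DOS/Win API; every name below is hypothetical), the restriction described above, where each process can only read what the others wrote on the previous tick, amounts to a pair of buffers swapped at the end of every tick:

#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical sketch: writes go into the "current" buffer, reads come from
// the "previous" buffer, so nothing written this tick is visible until next tick.
struct TickWorld {
    std::unordered_map<std::string, std::string> previous;
    std::unordered_map<std::string, std::string> current;

    std::string read(const std::string &who) const {
        auto it = previous.find(who);
        return it == previous.end() ? std::string() : it->second;
    }
    void write(const std::string &who, const std::string &what) {
        current[who] = what;
    }
    void endTick() {
        previous = std::move(current);  // this tick's writes become next tick's reads
        current.clear();
    }
};

int main() {
    TickWorld world;
    world.write("judge", "Draw!");
    std::cout << world.read("judge") << "\n";  // empty: not visible on the same tick
    world.endTick();
    std::cout << world.read("judge") << "\n";  // "Draw!": visible one tick later
    return 0;
}

The swap is exactly what rules out same-tick replies: the judge's "Draw!" only becomes readable one tick after it was written.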

This became really clear to me when I tried to implement a very simple wild west "quick draw" game a la the Kirby quick draw and Samurai Kirby type games. (All this happened months back, when I was doing the anniversary animation I made earlier this year.) These are very simple games where two players wait for "Draw!" to appear on the screen before they press the button to "fire their gun". Fire first, you win. Fire second, you lose. Fire too early, you lose. Both fire at the same time, you draw. This is a very simple game, but trying to build it on top of DOS/Win (or my DOS/Hurd variant) was extremely hard to do while splitting the judge and players into separate objects. I ended up writing very contorted code that ultimately did communicate on the same tick, but via a layered approach that took me an hour to track down all the bugs in. I can't imagine scaling it up further.

But DOS had some good ideas, and I got to thinking: if I extended the system to allow for immediate calls, what would it look like? That's when I hit a series of epiphanies which resulted in a big rewrite of the Spritely Goblins codebase (which turned out to make it more useful for programming game behavior, in a way that even fits very nicely into the Lux game loop). But I suppose I should really explain the what and why of Spritely Goblins, and how it fits into the larger goals of Spritely.

Terminal Phase and Spritely Goblins and stuff

Spritely Goblins is part of the larger Spritely project. Given Spritely's ambitious goal of "leveling up" the fediverse by extending it into the realm of rich and secure virtual worlds, we have to support distributed programming in a way that assumes a mutually suspicious network. (To get in the right mindset for this, maybe both watch my keynote and Mark Miller's keynote from the ActivityPub conference.) We really want to bake that in at the foundation of our design to build this right.

Thus Spritely Goblins is an asynchronous actor-ish distributed programming system on top of Racket. Kind of like Erlang, but with a focus on object capability security. Most of the good ideas have been taken from the E programming language ("the most interesting programming language you've never heard of"). The only programming environments I would consider viable to build Spritely on top of are ones that have been heavily informed by E, the best other candidate being the stuff Agoric is building on top of Javascript, such as their SwingSet architecture and Jessie (big surprise, since the folks behind E are mostly the folks behind Agoric), or some other more obscure language environments like Monte or, yes, Goblins. (Though, currently, despite hanging out in Racket-land, which drinks deeply of the dream of everyone building their own languages, Goblins is just a library. If you want to run code you don't trust, though, you'll have to wait until I release Spritely Dungeon, which will be a secure module / language restriction system for Racket. All in due time.)

Spritely Goblins already has some interesting properties:

  • All objects/actors are actually just procedures, waiting to be invoked! All "state" is merely the lexical scope of the enclosed procedure. Upon being invoked, a procedure can both return a value to its invoker (or in asynchronous programming, that fulfills the promise it is listening to) as well as specify what the next version of itself should be (ie, what procedure should be called the next time it handles a message).
  • Objects can only invoke other objects they have a reference to. This, surprisingly, is a sufficient security model as the foundation for everything we need (well, plus sealers/unsealers but I won't get into those here). This is the core observation from Jonathan Rees's A Security Kernel Based on the Lambda Calculus; object capability security is really just everyday programming via argument passing, which pretty much all programmers know how to do. (This applies to modules too, but more on that in a future post.)
  • In most cases, objects live in a "vat". This strange term from the object capability literature really means an event loop. Objects/actors can send messages to other objects in other vats; for the most part it doesn't matter where (on what machine, in what OS process, etc) other objects are when it comes to asynchronous message passing.
  • With asynchronous message passing, results are eventually resolved via promises. (Initially I had promises hidden behind coroutines in the core of the system, but it turns out that opens you to re-entrancy attacks if you aren't very careful. That may come back eventually, but with great care.)
  • While any object can communicate with any other object on any vat via message passing, objects on the same vat can do something that objects on separate vats can't: they can perform immediate calls (ie, something that looks like normal straight-ahead programming code, no coroutines required: you invoke the other object like a procedure, and it returns with a value). It turns out this is needed if you want to implement many interesting transactional things like financial instruments built on top of pure object capabilities. This is also nice for a game like Terminal Phase, where we really aren't doing anything asynchronous, are running on a fixed frame rate, and want to be deterministic. But a user should remember (for important reasons I won't get into in this post) that immediate calls are strictly less universal than asynchronous message passing, since they can only be made between objects in the same vat. It's pleasant that Goblins can support both methods of development, including in an intermixed environment.
  • There is actually a lower level of abstraction than a vat, it turns out! This is something that is different from both E and Agoric's SwingSet, I think, and maybe even mildly novel: all the core operations (receiving a message, spawning an actor, etc.) that operate on the actormap data structure are exposed to the user. Furthermore, all of these operations are transactional! When using the lower-level actormap, the user receives a new actormap (a "transactormap") which is a delta to the parent actormap (either another transactormap or the root protected weak-hashtable actormap, a "whactormap"). A rough, generic sketch of this delta idea appears after this list.
  • This transactionality is really exciting. It means that if something bad happens, we can always roll back to a safe state (or rather, never commit the unsafe state at all). In the default vat, if a message is received and an uncaught exception occurs, the promise is broken, but all the effects caused by interactions from handling the message are as if they never occurred. (Well, that is, as long as we use the "become this procedure" mechanism in Goblins to manage state! If you mutate a variable, you're on your own. A Racket #lang could prevent your users from doing such naughty things if you so care.)
  • It also means that snapshotting an actormap is really easy. Elm used to advertise having a "time traveling debugger" where they showed off Mario running around, and you could reverse time to a previous state. Apparently this was removed but maybe is coming back. Anyway it's trivial to do such a thing with Goblins' actormap, and I built such a (unpublished due to being unpolished) demo.
  • Most users won't work with the actormap though, they'll work with the builtin vat that takes care of all this stuff for them. You can build your own vat, or vat-like tools, though.
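As a rough illustration only (this is generic C++, not Goblins' actual Racket implementation, and every name in it is hypothetical; it needs C++17 for std::optional), the "delta to a parent map" idea mentioned above can be sketched as a transaction that records writes locally, falls back to its parent for reads, and only merges into the parent if you decide to commit:

#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical sketch of a layered, transactional key/value map.
struct TransactionalMap {
    explicit TransactionalMap(TransactionalMap *p = nullptr) : parent(p) {}

    std::unordered_map<std::string, std::string> delta;  // local, uncommitted writes
    TransactionalMap *parent;                             // nullptr means "root" map

    std::optional<std::string> lookup(const std::string &key) const {
        auto it = delta.find(key);
        if (it != delta.end()) return it->second;  // shadowed by this transaction
        if (parent) return parent->lookup(key);    // otherwise ask the parent
        return std::nullopt;
    }

    void set(const std::string &key, const std::string &value) {
        delta[key] = value;  // never touches the parent directly
    }

    // Merge this transaction's writes into the parent. Skipping this call is
    // the "roll back" case: the parent never sees the changes.
    void commit() {
        if (!parent) return;
        for (const auto &kv : delta) parent->delta[kv.first] = kv.second;
    }
};

int main() {
    TransactionalMap root;
    root.set("alice", "v1");

    TransactionalMap txn(&root);
    txn.set("alice", "v2");  // visible through txn.lookup("alice")...
    txn.commit();            // ...and only now visible through root.lookup("alice")
    return 0;
}

If something goes wrong mid-transaction, you simply drop the delta instead of committing it, which is the rollback behavior the post describes.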

Anyway, all the above works and exists. Actors can even speak to each other across vats... though, what's missing so far is the ability to talk to other objects/vats on other machines. That's basically what's next on my agenda, and I know how to do it... it's just a matter of getting the stuff done.

Well, the other thing that's missing is documentation. That's competing to be the next thing on the agenda.

But why a synchronous game right now?

If the really exciting stuff is the distributed secure programming stuff, why did I stop to do a synchronous non-distributed game on top of Spritely Goblins? Before I plowed ahead, given that the non-distributed aspects still rest on the distributed aspects, I wanted to make sure that the fundamentals of Spritely Goblins were good.

A space shooter is simple enough to implement, and using ascii art in a terminal meant I didn't need to spend too much time thinking about graphics (plus it's an interesting area that's under-explored... most terminal-based games are roguelikes or other turn-based games, not real time). Implementing it allowed me to find many areas that could be improved usability-wise in Goblins (indeed, it's been a very active month of development for Goblins). You only really learn which designs are and aren't nice by using them.

It's also a great way to identify performance bottlenecks. I calculated that roughly 1 million actor invocations could happen per second on my cheapo laptop... not bad. But that was when the actors didn't update themselves; when it came to the transactional updates, I could only seem to achieve about 65k updates per second. I figured this must be the transactionality, but it turns out it wasn't; the transactionality feature is very cheap. Can you believe that I got a jump from 65k updates per second to 680k updates per second just by switching from a Racket contract to a manual predicate check? (I expected a mild performance hit for using a contract over a manual predicate, but 10x...?) (I also added a feature so you can "recklessly" commit directly to the actormap without transactions... I don't recommend this for all applications, but if you do that you can get up to about 790k updates per second... which means that transactionality adds only about a 17% overhead, which isn't even close to the 10x gap I was seeing.) Anyway, the thing that led me to look into that in the first place was an experiment where I decided I wanted to see how many objects I could have updating at once. I might not have caught it otherwise. So making a game demo is useful for that kind of thing.

I feel that I've now gotten most of the churn in that layer of the design out of the way, so that I can move forward with the design on the distributed side of things next. That lets me keep a tighter focus on one layer at a time, and I'm happy about that.

What's next?

So with that out of the way, the next task is to work on both the mutually suspicious distributed programming environment and the documentation. I'm not sure in which order, but I guess we'll find out.

I'll do something similar with the distributed programming environment as well... I plan to write something basic which resembles a networked game at this stage to help me ensure that the components work nicely together.

In the meanwhile, Terminal Phase is very close to being a nice game to play, but I'm deciding to leave that as a funding milestone on my Patreon. This is because, as far as my technical roadmap has gone, Terminal Phase has performed the role it needs to play. But it would be fun to have, and I'm sure other people would like to play it as a finished game (heck, I would like to play it as a finished game), but I'd like to know... do people actually care enough about free software games? About this direction of work? Am I on the right track? Not to mention that funding this work is also simply damn hard.

But, at the time of writing, we're fairly close (about 85% of the way there), so maybe it will happen. If it sounds fun to you, maybe pitch in.

But one way or another, I'll have interesting things to announce ahead. Stay tuned here, or follow me on the fediverse or on Twitter if you so prefer.

Onwards and upwards!

Categories: FLOSS Project Planets


You can use a C++11 range for loop over a static array

Planet KDE - Thu, 2019-11-07 10:02
Take a loop over a static array like this one:

void DeviceListing::populateListing(const show showStatus)
{
    const Solid::DeviceInterface::Type needHardware[] = {
        Solid::DeviceInterface::Processor,
        Solid::DeviceInterface::StorageDrive,
        Solid::DeviceInterface::Battery,
        Solid::DeviceInterface::PortableMediaPlayer,
        Solid::DeviceInterface::Camera
    };

    clear();

    for (unsigned int i = 0; i < (sizeof(needHardware) / sizeof(Solid::DeviceInterface::Type)); i++) {
        QTreeWidgetItem *tmpDevice = createListItems(needHardware[i]);
        // Note: the static_cast's template argument was lost in the original
        // HTML; QTreeWidgetItem * is assumed here.
        deviceMap[needHardware[i]] = static_cast<QTreeWidgetItem *>(tmpDevice);

        if ((tmpDevice->childCount() > 0) || (showStatus == ALL)) {
            addTopLevelItem(tmpDevice);
        }
    }
}

In C++11 you can rewrite it in the much easier-to-read form:

void DeviceListing::populateListing(const show showStatus)
{
    const Solid::DeviceInterface::Type needHardware[] = {
        Solid::DeviceInterface::Processor,
        Solid::DeviceInterface::StorageDrive,
        Solid::DeviceInterface::Battery,
        Solid::DeviceInterface::PortableMediaPlayer,
        Solid::DeviceInterface::Camera
    };

    clear();

    for (const Solid::DeviceInterface::Type nh : needHardware) {
        QTreeWidgetItem *tmpDevice = createListItems(nh);
        // As above, the cast's template argument was lost; QTreeWidgetItem * assumed.
        deviceMap[nh] = static_cast<QTreeWidgetItem *>(tmpDevice);

        if ((tmpDevice->childCount() > 0) || (showStatus == ALL)) {
            addTopLevelItem(tmpDevice);
        }
    }
}
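A small aside not in the original post: if C++17 is available, std::size from <iterator> gives the element count of a built-in array without the sizeof division, which can be handy when you still need the index; the range-based for above stays the simplest choice when you don't.

#include <cstddef>
#include <iterator>

// std::size(needHardware) returns the array's element count (C++17).
for (std::size_t i = 0; i < std::size(needHardware); ++i) {
    QTreeWidgetItem *tmpDevice = createListItems(needHardware[i]);
    // ... same body as above ...
}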
Categories: FLOSS Project Planets

Drudesk: Beautiful websites with Olivero, Drupal 9's new main theme

Planet Drupal - Thu, 2019-11-07 07:57

All beautiful websites have one thing in common — a good theme. It should be not only beautiful but also mobile-responsive, accessible, and ready to support modern website functionality.

This is exactly what Olivero offers — the future main front-end theme for Drupal 9. And that future is just around the corner, because Drupal 9 is coming! It is bringing in the era of beautiful websites.

Categories: FLOSS Project Planets

TomThorp.me Blog: How to build beautiful CERN websites

Planet Drupal - Thu, 2019-11-07 07:02
How to build beautiful CERN websites using Drupal 8
Categories: FLOSS Project Planets

TomThorp.me Blog: Nasty PHP7 bug exploit in the wild

Planet Drupal - Thu, 2019-11-07 06:26
New PHP7 bug CVE-2019-11043 can allow even non-technical attackers to take over servers.
Categories: FLOSS Project Planets

Shirish Agarwal: A tale of unfortunate coincidences and incidents

Planet Debian - Thu, 2019-11-07 03:40

This would be a biggie so please relax as this is going to be a really long one. So make sure you have something to chill out with.

People have talked about the "ease of doing business" and whatnot for quite some time. Over the last few months there has been a slow-down, and due to certain issues I had to make some purchases. This blog post will share the travails from a consumer perspective only, as the horrors from the other side are also well known. For this blog post, I will share the series of unfortunate coincidences and incidents which happened to me.

APC UPS

For quite a few months my old UPS had been giving issues, hence I decided to see what was available in the market. As my requirements are small (two desktops and sometimes a laptop), I settled on the APC BR1000G-IN. Instead of buying it from Amazon I decided to try my luck with the local vendors. None of them had the specific UPS which I wanted. Before I continue, I want to share a bit of trivia: I had known that Schneider Electric was buying APC the last time I bought an APC UPS. That was the big news at the time, in 2007. When I bought that APC UPS I was given a 5-year warranty, and the UPS worked for 7-8 years, so I was pretty happy with it. I also had a local brand I used which worked but didn't offer anything special, such as an LED interface on the UPS.

Anyway, with those factors in mind, I went to the APC site, looked at the partner list, and saw that there were something like 20-25 odd partners in my city, Pune. I sent one or two e-mails to each of the APC partners; some were generic about my requirements, while in others I was more specific about what I wanted. I was hoping that at least a few would respond. To my horror, most e-mails bounced back and some went into a black hole, meaning no one answered. I even tried calling some of the numbers given for the partners, and even those turned out to be either fake or not working; which of the two it is I don't know.

Luckily, I have been to Mumbai quite often and have a few contacts with people who are in IT sales. One of them had numbers of some people in Mumbai, one of which worked, and that person in turn directed me to a vendor in Pune. One thing led to another and soon I had the BR1000G-IN at my place. Sadly, the whole process was draining and took almost a week. I read the user guide fully, did all the connections, and found that the start/reset button of the UPS is recessed; it doesn't connect well with my stubby fingers. I asked my mother and even she was not able to push the button.

APC Back-UPS Pro 1000 Copyright – APC

As can be seen, it sits at the upper corner, and when trying to turn the unit on and do the required settings, it just wouldn't work. I was not sure whether it was the UPS at fault or my stubby finger. As shared, even my mom was not able to push it. After trying for a day or two, I turned to the vendor, had to escalate the issue, and finally was assigned an engineer who came to check it. When you buy a unit for around INR 10k this is the least they can do. Somehow his finger was able to reach it. For both mum and me, the start/reset button was a design fail. While I can understand why the original design might have been so, I am sure a lot of people like me would have problems with it. Coincidentally, the engineer was from one of the vendors whom I had e-mailed earlier. I showed him the e-mail I had sent to the firm. Sadly, till date the e-mail address hasn't been corrected: vishal@modularelect.com is still a black hole. I was told this would be corrected soon, but two months on it has still not been fixed.

Sadly, my problems still continue somewhat. While I'm thankful to whoever wrote the apcupsd Debian wiki page, for some reason it doesn't work for me. I have asked the good people on the apcupsd-users mailing list and am hopeful I will get an answer in a day or two. The good part is that the cable at least is working and giving some status information as shared. Hopefully it will be some minor thing rather than something major, but only time will tell.

Another bit of trivia – while some of you may have known this and some might not, I was also checking whether APC had brought out Li-Ion batteries too. As fate would have it, the date I bought the UPS was the same date the Li-Ion batteries were launched, or at least shown on YouTube.

While it will probably take a few years, I will be looking forward to it. There is also the possibility of supercapacitors as well, but that is well into the future.

Cooler Master G650M

I remember writing about wanting a better SMPS about a year back. At the time I was having power problems, I also thought it would be prudent to change the SMPS. While I was looking for a 700-750W SMPS, I finally had to settle for the Cooler Master G650M Bronze SMPS as that was the one which was available. There were others too, 1000W (gold) ones, but out of my price range. Even the 650W SMPS cost me around INR 7.5k. This also worked for a few months and then shorted out. I was highly surprised when the SMPS conked out, as the warranty is supposed to run for five whole years. While buying I had checked the labelling and it was packed only a couple of months before purchase, so not that old. What is most peculiar is that the product is no longer in the market and has in fact been replaced by the Cooler Master MWE Bronze 650, which has a 3-year warranty. Why that is so is beyond me. Products which have a 5-year warranty or more are usually in the market for a much longer time. Unlike other brands, Cooler Master doesn't believe in repair but offers replacement, which takes anywhere between 2 to 3 weeks, something I didn't know at the time of purchase.

Just to be clear, I wasn’t sure what was wrong. I had bought the ASUS Prime Z270-P motherboard which has LED lighting all around it. I have blogged about it before last year in the same blog post above. What was peculiar that the stock fan above the CPU was not running and nor the cabinet power button, although rest of the motherboard it was showing power so it was peculiar as to what could the problem might be. I have an old voltage detector, something like this from which I could ascertain that I was getting power at points but still couldn’t figure out what was wrong. I did have the old stock SMPS but as have shared before it has lot lesser wattage, says 400 on the label but probably more like 300-325 watts. I removed few of the components from the system before taking it the vendor so it would make it easier for the vendor to tell what is wrong. I assumed that it most probably might be the switch as I usually use reboot all the time whenever I feel the need for reboot, usually after I have some updates and need to refresh my session. The vendor was able to diagnoze within few minutes that the fault lay in the SMPS and not in the switch or anywhere else and asked me to take the unit to the service center for RMA.

While I sent it for RMA, I thought I could survive for the required time without any internet. But I was wrong. As the news on most news channels in India is so stale and biased, I found it unbearable to be without real news within 2-3 days. I again wondered how people in Kashmir manage without all the facilities that we have.

GRUB 2 missing, UEFI and supergrub2

I went back to the vendor with my old stock SMPS and it worked, but I found that the GRUB 2 menu was missing; it was just booting plain into Windows 10. I started a thread at debian-user trying to figure out if there was some issue at my end, maybe some grub variable had got lost or something, but the responses seemed to suggest that something else had happened. I also read through some of the UEFI documentation on Wikipedia and the web; I didn't go into much depth as that would have been distracting, since the specification itself is evolving and subject to change. I did find some interesting bits and pieces, but that is for a later date perhaps. One of the things I remembered from my previous run-ins with GRUB 2 issues is that supergrub2 had been immensely useful. Sadly though, the version which I tried as stable was dumping me to grub rescue instead of the grub menu when I used the ISO image on a thumb drive. I could have tried to make a go of it but was too lazy. On an off-chance I looked at supergrub2 support and found that somebody else had also had the exact same issue and it had been reported. I chimed in and tried one of the beta versions, and it worked, which made me breathe easier. After getting into Debian, I tried the old $ sudo update-grub which usually fixed such issues. I again tried to boot without the help of the USB disk but failed, as it again booted me into the MS-Windows environment.

Firmware update

I don't know how or why I came to think it might be a firmware issue. While trawling the web I had come across reports of similar issues, especially with dual-booting or multi-booting, where firmware was one of the causes found. Apart from waiting 2 weeks and then perhaps getting a new hard drive, I had no option other than to update the firmware.

Using inxi I was able to get the firmware details, which I also shared on the GitHub issue tracker page before the update.

$ inxi -M
Machine: Type: Desktop Mobo: ASUSTeK model: PRIME Z270-P v: Rev X.0x serial:
UEFI: American Megatrends v: 0808 date: 06/16/2017

I would ask you to look at the version number and the date. I used Asus's EZ Update utility and downloaded the new UEFI BIOS .pcap file. In EZ Update, I just had to give the correct path, and a couple of restarts later I had the new version of the UEFI BIOS, as can be seen below.

Asus UEFI BIOS 1205 Copyright – Asus

One thing to note is that there is no unix way, at least that I know of, of updating a UEFI BIOS. If anybody knows one, please let me know. I did look for 'unix .pcap update' but most of the tutorials I found were for network packet sniffing rather than UEFI BIOS updates. Maybe it's an Asus issue. Does anybody know, or can point to something?

The update fixed the EFI entries, which had also not been appearing, as can now be seen via efibootmgr.

$ efibootmgr
BootCurrent: 0004
Timeout: 1 seconds
BootOrder: 0004,0000,0003,0001,0002,0005
Boot0000* Windows Boot Manager
Boot0001* UEFI:CD/DVD Drive
Boot0002* UEFI:Removable Device
Boot0003* Hard Drive
Boot0004* debian
Boot0005* UEFI:Network Device

While I’m not going to go into more details but this should be enough –

$ efibootmgr -v | grep debian
Boot0004* debian HD(2,GPT,xxxxx)/File(\EFI\DEBIAN\GRUBX64.EFI)..BO

I have hidden some details for privacy's sake, such as the address space as well as the GPT hash, etc. Finally, the GRUB 2 menu came to me in all its loveliness –

Grub 2.04-3 debian-edu CC-SA-3.0

There are still some things I want to fix though; for instance, I hope to help out adrian in testing some of his code. I wasn't able to do so because nowadays we get cheap multi-level cell flash; see for e.g. this paper, which I might have mentioned before.

Ending Notes – Powershell

To end, I did try to make a home even in MS-Windows, but the usefulness of the shell far outstrips anything that is on MS-Windows. I used PowerShell 5, then downloaded and installed PowerShell 6, and even managed to figure out how to get quite a few of the utilities to behave similarly to how they behave under GNU/Linux, but the windowing environment itself was still more of an irritant than anything else. One of the biggest letdowns was not being able to use touch. Somebody made a PowerShell module for it, but it still needs to be imported for every session. While I'm sure I could have written a small script for the same, it was a better use of my time to find an existing solution for it. As shared, I also learnt a bit about UEFI in the process. I am sharing screenshots of PowerShell 5 and 6.

Powershell 5 Copyright – Microsoft Corporation
Powershell 6 Copyright – Microsoft Corporation

Conclusion – While it doesn’t even cover probably even a quarter of the issues or use-cases but even if one person finds it useful, that is good enough. I have taken lot of shortcuts and not shared whole lot otherwise this would have been lot longer. One of the things I forgot to mention is that I did find some mentions of MS-Windows overwriting, this was in October 2018 as well as October 2019 Security Updates as well. How much to trust the issues that people posted don’t really know.

Categories: FLOSS Project Planets

health @ Savannah: GNU Health 3.6RC3 available at community server & demo database

GNU Planet! - Wed, 2019-11-06 18:54

Dear community

The Release Candidate 3 (RC3) for the upcoming GNU Health 3.6 has been installed in the community server.

You can download the latest GTK client, either using pip (from pypi test repository) or the source tarball as explained in the developer's corner chapter.

Login Info:
    Server information : federation.gnuhealth.org:9555
    Database: ghdemo36rc3
    Username: admin
    Password: gnusolidario

Alternatively, the demo database can be downloaded and installed locally via the demo db installer

    $ bash ./install_demo_database 36rc3

Please download and test the following files:

  • gnuhealth-3.6RC3.tar.gz: Server with the 45 packages
  • gnuhealth-client-3.6RC3.tar.gz  : The GH HMIS GTK client
  • gnuhealth-client-plugins-3.6RC1.tar.gz : The Federation Resource Locator, the GNU Health Camera and the crypto plugin. Note that there have been no changes to the plugins

Remember that all the components of the 3.6 series run in Python 3

You can download the RC tarballs from the development dir:

https://www.gnuhealth.org/downloads/development/unstable/

There is a new section on the Wikibook for the GH Hackers.

https://en.wikibooks.org/wiki/GNU_Health/Developer's_corner

Please check it before you install it. It contains important information on dependencies and other installation instructions.

Happy and healthy hacking !

Luis

Categories: FLOSS Project Planets

Gunnar Wolf: Made with Creative Commons ⇒ Hecho con Creative Commons. To the printer!

Planet Debian - Wed, 2019-11-06 18:09

I am very happy to tell you that, around 2.5 years after starting the translation project, today we sent to the presses the Spanish translation for Creative Commons' book, Made with Creative Commons!

This has been quite a feat, on many fronts — social, technical, organizational. Of course, the book is freely redistributable, and you can get it at IIEc-UNAM's repository.

As we are producing this book from DocBook sources, we will also be publishing an EPUB version. Only... we need to clear some processes first (i.e. having the right department approve it, getting a matching ISBN record, etc.), and it will probably only be done by early next year. Of course, you can clone our git repository and build it at home :-]

Of course, I cannot celebrate until the boxes of brand new books land in my greedy hands... But it will happen soon™.

Categories: FLOSS Project Planets

Erik Marsja: How to Handle Coroutines with asyncio in Python

Planet Python - Wed, 2019-11-06 18:08


When a program becomes very long and complex, it is convenient to divide it into subroutines, each of which implements a specific task. However, subroutines cannot be executed independently, but only at the request of the main program, which is responsible for coordinating the use of subroutines.

In this post, we introduce a generalization of the concept of subroutines, known as coroutines: just like subroutines, coroutines compute a single computational step, but unlike subroutines, there is no main program to coordinate the results. The coroutines link themselves together to form a pipeline without any supervising function responsible for calling them in a particular order. 

This post is taken from the book Python Parallel Programming Cookbook  (2nd Ed.) by Giancarlo Zaccone. In this book, you will implement effective programming techniques in Python to build scalable software that saves time and memory. 

In a coroutine, the execution point can be suspended and resumed later, since the coroutine keeps track of the state of execution. Having a pool of coroutines, it is possible to interleave the computations: the first one runs until it yields control back, then the second runs and goes on down the line.


The interleaving is managed by the event loop. It keeps track of all the coroutines and schedules when they will be executed.
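As a rough sketch of that idea (using the @asyncio.coroutine decorator notation described in the next section; the worker name and step counts are our own, not from the book), two coroutines can be interleaved by the loop like this:

import asyncio

@asyncio.coroutine
def worker(name, steps):
    for i in range(steps):
        print('%s: step %d' % (name, i))
        # Hand control back to the event loop so the other coroutine can run
        yield from asyncio.sleep(0)

loop = asyncio.get_event_loop()
# gather() schedules both coroutines; the loop interleaves their steps
loop.run_until_complete(asyncio.gather(worker('A', 3), worker('B', 3)))
loop.close()

Running this prints the steps of the two workers interleaved, since each yield from asyncio.sleep(0) suspends one worker and lets the loop resume the other.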

Other important aspects of coroutines are as follows:

  • Coroutines allow for multiple entry points that can yield multiple times.
  • Coroutines can transfer execution to any other coroutine.

The term yield is used here to describe a coroutine pausing and passing the control flow to another coroutine.

Getting ready to work with coroutines

We will use the following notation to work with coroutines:

import asyncio

@asyncio.coroutine
def coroutine_function(function_arguments):
    ............
    DO_SOMETHING
    ............

Coroutines use the yield from syntax introduced in PEP 380 (read more at https://www.python.org/dev/peps/pep-0380/) to stop the execution of the current computation and suspend the coroutine’s internal state.

In particular, in the case of yield from future, the coroutine is suspended until future is done, and then the result of future will be propagated (or an exception will be raised); in the case of yield from coroutine, the coroutine waits for another coroutine to produce a result, which will be propagated (or an exception will be raised).
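For instance, a minimal sketch of the yield from future case (the wait_for_result name and the one-second delay are our own choices, not from the book) might look like this:

import asyncio

@asyncio.coroutine
def wait_for_result():
    loop = asyncio.get_event_loop()
    future = asyncio.Future()
    # Arrange for the future to be completed one second from now
    loop.call_later(1, future.set_result, 'done')
    # Suspended here until the future is done; its result is then propagated
    result = yield from future
    print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(wait_for_result())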

As we shall see in the next example, in which the coroutines will be used to simulate a finite state machine, we will use the yield from coroutine notation.

More on coroutines with asyncio are available at https://docs.python.org/3.5/library/asyncio-task.html.

Using coroutines to simulate a finite state machine

In this example, we see how to use coroutines to simulate a finite state machine with five states.

A finite state machine, or finite state automaton, is a mathematical model that is widely used in engineering disciplines, but also in sciences such as mathematics and computer science.

The automaton whose behavior we want to simulate using coroutines is as follows:

The states of the system are S0, S1, S2, S3, and S4, with 0 and 1 being the values for which the automaton can pass from one state to the next (this operation is called a transition). So, for example, state S0 can pass to state S1, but only for the value 1, and S0 can pass to state S2, but only for the value 0.
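Reading the transitions off the code that follows, the automaton can also be summarized as a small Python mapping (the TRANSITIONS name is ours, purely for illustration):

# (current state, input value) -> next state, as implemented below
TRANSITIONS = {
    ('S0', 0): 'S2', ('S0', 1): 'S1',
    ('S1', 0): 'S3', ('S1', 1): 'S2',
    ('S2', 0): 'S1', ('S2', 1): 'S3',
    ('S3', 0): 'S1', ('S3', 1): 'S4',  # S4 is the end state
}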

The following Python code simulates a transition of the automaton from state S0 (the start state), up to state S4 (the end state):

1) The first step is obviously to import the relevant libraries:

import asyncio
import time
from random import randint

2) Then, we define the coroutine relative to start_state. The input_value variable is evaluated randomly; it can be 0 or 1. If it is 0, then control goes to coroutine state2; otherwise, it changes to coroutine state1:

@asyncio.coroutine
def start_state():
    print('Start State called\n')
    input_value = randint(0, 1)
    time.sleep(1)
    if input_value == 0:
        result = yield from state2(input_value)
    else:
        result = yield from state1(input_value)
    print('Resume of the Transition:\nStart State calling' + result)

3) Here is the coroutine for state1. The input_value variable is evaluated randomly; it can be 0 or 1. If it is 0, then control goes to state3; otherwise, it changes to state2:

@asyncio.coroutine
def state1(transition_value):
    output_value = 'State 1 with transition value = %s\n' % \
        transition_value
    input_value = randint(0, 1)
    time.sleep(1)
    print('...evaluating...')
    if input_value == 0:
        result = yield from state3(input_value)
    else:
        result = yield from state2(input_value)
    return output_value + 'State 1 calling %s' % result

4) The coroutine for state2 has the transition_value argument that allowed the passage of the state. Also, in this case, input_value is randomly evaluated. If it is 0, then the state transitions to state1; otherwise, the control changes to state3:

@asyncio.coroutine
def state2(transition_value):
    output_value = 'State 2 with transition value = %s\n' % \
        transition_value
    input_value = randint(0, 1)
    time.sleep(1)
    print('...evaluating...')
    if input_value == 0:
        result = yield from state1(input_value)
    else:
        result = yield from state3(input_value)
    return output_value + 'State 2 calling %s' % result

5) The coroutine for state3 has the transition_value argument, which allowed the passage of the state. input_value is randomly evaluated. If it is 0, then the state transitions to state1; otherwise, the control changes to end_state:

@asyncio.coroutine
def state3(transition_value):
    output_value = 'State 3 with transition value = %s\n' % \
        transition_value
    input_value = randint(0, 1)
    time.sleep(1)
    print('...evaluating...')
    if input_value == 0:
        result = yield from state1(input_value)
    else:
        result = yield from end_state(input_value)
    return output_value + 'State 3 calling %s' % result

6) end_state records the transition_value argument, which allowed the passage of the state, in its output string, and then stops the computation:

@asyncio.coroutine
def end_state(transition_value):
    output_value = 'End State with transition value = %s\n' % \
        transition_value
    print('...stop computation...')
    return output_value

7) In the __main__ function, the event loop is acquired, and then we start the simulation of the finite state machine, calling the automaton’s start_state:

if __name__ == '__main__':
    print('Finite State Machine simulation with Asyncio Coroutine')
    loop = asyncio.get_event_loop()
    loop.run_until_complete(start_state())

How coroutines simulate a finite state machine

Each state of the automaton has been defined by using the decorator:

@asyncio.coroutine

For example, state S0 is defined here:

@asyncio.coroutine
def StartState():
    print("Start State called \n")
    input_value = randint(0, 1)
    time.sleep(1)
    if (input_value == 0):
        result = yield from State2(input_value)
    else:
        result = yield from State1(input_value)

The transition to the next state is determined by input_value, which is defined by the randint(0,1) function of Python’s random module. This function randomly provides a value of 0 or 1.

In this manner, randint randomly determines the state to which the finite state machine will pass:

input_value = randint(0,1)

After determining the values to pass, the coroutine calls the next coroutine using the yield from command:

if (input_value == 0):
    result = yield from State2(input_value)
else:
    result = yield from State1(input_value)

The result variable is the value that each coroutine returns. It is a string, and, at the end of the computation, we can reconstruct the transition from the initial state of the automaton, start_state, up to end_state.

The main program starts the evaluation inside the event loop:

if __name__ == "__main__":
    print("Finite State Machine simulation with Asyncio Coroutine")
    loop = asyncio.get_event_loop()
    loop.run_until_complete(StartState())

Running the code, we have an output like this:

Finite State Machine simulation with Asyncio Coroutine
Start State called

...evaluating...
...evaluating...
...evaluating...
...evaluating...
...stop computation...
Resume of the Transition :
Start State calling State 1 with transition value = 1
State 1 calling State 2 with transition value = 1
State 2 calling State 1 with transition value = 0
State 1 calling State 3 with transition value = 0
State 3 calling End State with transition value = 1

Handling coroutines with asyncio in Python 3.5

Before Python 3.5 was released, the asyncio module used generators to mimic asynchronous calls and, therefore, had a different syntax than the current version of Python 3.5.

Python 3.5 introduced the async and await keywords. Notice the lack of parentheses around the await func() call.

The following is an example of “Hello, world!“, using asyncio with the new syntax introduced by Python 3.5+:

import asyncio

async def main():
    print(await func())

async def func():
    # Do time intensive stuff...
    return "Hello, world!"

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())

In this post, we learned how to handle coroutines with asyncio. To learn more features of asynchronous programming in Python, you may go through the book Python Parallel Programming Cookbook  (2nd Ed.) by Packt Publishing.

The post How to Handle Coroutines with asyncio in Python appeared first on Erik Marsja.

Categories: FLOSS Project Planets

Reuven Lerner: Podcasts, podcasts, and even more podcasts

Planet Python - Wed, 2019-11-06 14:02

I’ve recently appeared on a whole bunch of podcasts about Python, freelancing, and even (believe it or not) learning Chinese! If you’re interested in any or all of these subjects, then you might want to catch my interviews:

  • Talk Python to Me: I spoke with Michael Kennedy (and Casey Kinsen) about freelancing in Python — and things to consider when you’re thinking of freelancing.
  • Programming Leadership: I spoke with Marcus Blankenship about why companies offer training to their employees, how they should look for training, and how best to take advantage of a course.
  • Profitable Python: I spoke with Ben McNeill about the world of Python training — how training works (for me, companies that invite me to train, and the people in my courses), how to build up an online business, and the difference between B2C vs. B2B. You can watch the video on YouTube, or listen to the audio version of the podcast!
  • Teaching Python: I spoke with Kelly Paredes and Sean Tibor about what it’s like to teach adults vs. children, and what tricks I use to help keep my students engaged. I learned quite a bit about how they teach Python to middle-school students!
  • You Can Learn Chinese: I’ve been studying Chinese for a few years, and spent some time chatting with Jared Turner about my experience, how I continue to improve, and how my Chinese studies have affected my work teaching Python. The entire episode is great, and my interview starts about halfway through.

In related news, you might know that I’ve been a co-panelist on the Freelancers Show podcast for the last few years. The entire panel (including me) recently left the show, and we’re currently discussing how/when/where we’ll restart.

I’ll be sure to post to my blog here when there are updates — but if you’re a freelancer of any level (new or experienced) who might be interested in sharing your stories with us, please contact me, so we can speak with you when we re-start in our new format.

The post Podcasts, podcasts, and even more podcasts appeared first on Reuven Lerner.

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in October 2019

Planet Debian - Wed, 2019-11-06 13:25

Welcome to the October 2019 report from the Reproducible Builds project. 👌

In our monthly reports we attempt to outline the most important things that we have been up to recently. As a reminder of what our little project is all about: whilst anyone can inspect the source code of free software for malicious changes, most software is distributed to end users or servers as precompiled binaries. Reproducible builds tries to ensure that no changes have been made during these compilation processes by promising that identical results are always generated from a given source, allowing multiple third parties to come to a consensus on whether a build was compromised.

In this month’s report, we will cover:

  • Media coverage & conferences: Reproducible builds in Belfast & science
  • Reproducible Builds Summit 2019: Registration & attendees, etc.
  • Distribution work: The latest work in Debian, OpenWrt, openSUSE, and more…
  • Software development: More diffoscope development, etc.
  • Getting in touch: How to contribute & get in touch

If you are interested in contributing to our venture, please visit our Contribute page on our website.

Media coverage & conferences

Jonathan McDowell gave an introduction to Reproducible Builds in Debian at the Belfast Linux User Group.

Whilst not strictly related to reproducible builds, Sean Gallagher from Ars Technica wrote an article entitled Researchers find bug in Python script may have affected hundreds of studies:

A programming error in a set of Python scripts commonly used for computational analysis of chemistry data returned varying results based on which operating system they were run on.

Reproducible Builds Summit 2019

Registration for our fifth annual Reproducible Builds summit that will take place between the 1st and 8th December in Marrakesh, Morocco has opened and invitations have been sent out.

Similar to previous incarnations of the event, the heart of the workshop will be three days of moderated sessions with surrounding “hacking” days and will include a huge diversity of participants from Arch Linux, coreboot, Debian, F-Droid, GNU Guix, Google, Huawei, in-toto, MirageOS, NYU, openSUSE, OpenWrt, Tails, Tor Project and many more. We are still seeking additional sponsorship for the event. Sponsorship enables the attendance of people who would not otherwise be able to attend. If you or your company would be able to sponsor the event, please contact info@reproducible-builds.org.

If you would like to learn more about the event and how to register, please visit our dedicated event page.

Distribution work

GNU Guix announced that they had significantly reduced the size of their “bootstrap seed” by replacing binutils, GCC and glibc with smaller alternatives resulting in the package manager “possessing a formal description of how to build all underlying software” in a reproducible way from a mere 120MB seed.

OpenWrt is a Linux-based operating system targeting wireless network routers and other embedded devices. This month Paul Spooren (aparcar) posted a patch to their mailing list adding KCFLAGS to the kernel build flags to make it easier to rebuild the official binaries.

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution which describes how rpm was updated to run most builds with the -flto=auto argument, saving mirror disk space/bandwidth. In addition, maven-javadoc-plugin received a toolchain patch (originating from Debian) in order to normalise a date.

Debian

In Debian this month Didier Raboud (OdyX) started a discussion on the debian-devel mailing list regarding building Debian source packages in a reproducible manner (thread index). In addition, Lukas Pühringer prepared an upload of in-toto, a framework to protect supply chain integrity by the Secure Systems Lab at New York University which was uploaded by Holger Levsen.

Holger Levsen started a new section on the Debian wiki to centralise documentation of the progress made on various Debian-specific reproducibility issues, and noticed that the “essential” package set in the bullseye distribution became unreproducible again, likely due to a bug in Perl itself. Holger also restarted a discussion on Debian bug #774415, which requests that the devscripts collection of utilities that “make the life of a Debian package maintainer easier” add a script/wrapper to enable easier end-user testing of whether a package is reproducible.

Johannes Schauer (josch) explained that their mmdebstrap tool can create bit-for-bit identical Debian chroots of the unstable and buster distributions for both the essential and minbase bootstrap “variants”, and Bernhard M. Wiedemann contributed to a discussion regarding adding a “global” build switch to enable/disable Profile-Guided Optimisation (PGO) and Link-Time Optimisation in the dpkg-buildflags tool, noting that “overall it is still very hard to get reproducible builds with PGO enabled.”

64 reviews of Debian packages were added, 10 were updated and 35 were removed this month adding to our knowledge about identified issues. Three new types were added by Chris Lamb (lamby): nondeterministic_output_in_code_generated_by_ros_genpy, nondeterministic_ordering_in_include_graphs_generated_by_doxygen & nondeterministic_defaults_in_documentation_generated_by_python_traitlets.

Lastly, there was a far-reaching discussion regarding the correctness and suitability of setting the TZ environment variable to UTC when it was noted that the value UTC0 was “technically” more correct.

Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Lastly, a request from Steven Engler to sort fields in the PKG-INFO files generated by the setuptools Python module build utilities was resolved by Jason R. Coombs and Vagrant Cascadian added SOURCE_DATE_EPOCH support to LTSP’s manual page generation.

strip-nondeterminism & reprotest

strip-nondeterminism is our tool to remove specific non-deterministic results from successful builds. This month, Chris Lamb made a number of changes, including uploading version 1.6.1-1 to Debian unstable. This dropped the bug_803503.zip test fixture as it is no longer compatible with the latest version of Perl’s Archive::Zip module (#940973).

reprotest is our end-user tool that builds the same source code twice in widely differing environments and then checks the binaries produced by each build for any differences. This month, Iñaki Malerba updated our Salsa CI scripts [] as well as adding a --control-build parameter []. Holger Levsen uploaded the package as 0.7.10, bumping the Debian “standards version” to 4.4.1 [].

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb (lamby) made the following changes, including uploading versions 126, 127, 128 and 129 to the Debian unstable distribution:

  • Disassembling and reporting on files related to the R (programming language):

    • Expose an .rdb file’s absolute paths in the semantic/human-readable output, not hidden deep in a hexdump. []
    • Rework and refactor the handling of .rdb files with respect to locating the parallel .rdx prior to inspecting the file, to ensure that we do not add files to the user’s filesystem in the case of directly comparing two .rdb files or — worse — overwrite a file in its place. []
    • Query the container for the full path of the parallel .rdx file to the .rdb file as well as looking in the same directory. This ensures that comparing two Debian packages shows any varying path. []
    • Correct the matching of .rds files by also detecting newer versions of this file format. []
    • Don’t read the site and user environment when comparing .rdx, .rdb or .rds files by using Rscript’s --vanilla option. [][]
    • Ensure all object names are displayed, including ones beginning with a fullstop (.) [] and sort package fields when dumping data from .rdb files [].
    • Mask/hide standard error when processing .rdb files [] and don’t include useless/misleading NULL when dumping data from them. []
    • Format package contents as foo = bar rather than using ugly and misleading brackets, etc. [] and include the object’s type [].
    • Don’t pass our long script to parse .rdb files via the command line; use standard input instead. []
    • Call the deparse function to ensure that we do not error out and revert to a binary diff when processing .rdb files with internal “vector” types; they do not automatically coerce to strings. []
    • Other misc/cosmetic changes. [][][]
  • Output/logging:
    • When printing an error from a command, format the command for the user. []
    • Truncate very long command lines when displaying them as an external source of data. []
    • When formatting command lines ensure newlines and other metacharacters appear escaped as \n, etc. [][]
    • When displaying the standard error from commands, ensure we use the escaped version. []
    • Use “exit code” over “return code” terminology when referring to UNIX error codes in displayed differences. []
  • Internal API:
    • Add ability to pass bytestring input to external commands. []
    • Split out command-line formatting into a separate utility function. []
    • Add support for easily masking the standard error of commands. [][]
    • To match the libarchive container, raise a KeyError exception if we request an invalid member from a directory. []
    • Correct string representation output in the traceback when we cannot locate a specific item in a container. []
  • Misc:
    • Move build-dependency on python-argcomplete to its Python 3 equivalent to facilitate Python 2.x removal. (#942967)
    • Track and report on missing Python modules. (#72)
    • Move from deprecated $ADTTMP to $AUTOPKGTEST_TMP in the autopkgtests. []
    • Truncate the tcpdump expected diff to 8KB (from ~600KB). []
    • Try and ensure that new test data files are generated dynamically, ie. at least no new ones are added without “good” reasons. []
    • Drop unused BASE_DIR global in the tests. []

In addition, Mattia Rizzolo updated our tests to run against all supported Python versions [] and to exit with a UNIX exit status of 2 instead of 1 in case of running out of disk space []. Lastly Vagrant Cascadian updated diffoscope 126 and 129 in GNU Guix, and updated inputs for additional test suite coverage.

trydiffoscope is the web-based version of diffoscope and this month Chris Lamb migrated the tool to depend on the python3-docutils package over python-docutils to allow for Python 2.x removal (#943293) as well as updating the packaging to the latest Debian standards and conventions [][][].

Project website

There was yet more effort put into our website this month, including Chris Lamb improving the formatting of reports [][][][][] and tidying the new “Testing framework” links [], etc.

In addition, Holger Levsen added the Tor Project’s Reproducible Builds Manager to our “Who is Involved?” page and Mattia Rizzolo dropped a literal <br> HTML element [].

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Holger Levsen:
    • Debian-specific changes:
      • Add a script to ease powercycling x86 and arm64 nodes. [][]
      • Don’t create suite-based directories for buildinfos.debian.net. []
      • Make all four suites being tested shown in a single row on the performance page. []
    • OpenWrt changes:
      • Only run jobs every third day. []
      • Create jobs to run the reproducible_openwrt_rebuild.py script today and in the future. []
  • Mattia Rizzolo:
    • Add some packages that were lost while updating to buster. []
    • Fix the auto-offline functionality by checking the content of the permalinks file instead of following the lastSuccessfulBuild link that is no longer being updated. []
  • Paul Spooren (OpenWrt):
    • Add a reproducible_common utilities file. []
    • Update the openwrt-rebuild script to use schroot. []
    • Use unbuffered Python output [] as well as fixing newlines [][]

The usual node maintenance was performed by Holger Levsen [][], Mattia Rizzolo [][][] and Vagrant Cascadian [][][].

Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:


This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Categories: FLOSS Project Planets

Lullabot: Custom Layout Options in Drupal 8

Planet Drupal - Wed, 2019-11-06 12:52

Like most CMS products that provide structured content modeling tools, Drupal excels at building template-driven pages with consistent layouts. A blog post might use one template and a staff bio another, but as long as the basic shape of the page is consistent, Drupal's traditional content modeling and page composition tools are usually enough. When it comes time to build special content, though—departmental portal pages, landing pages for ephemeral marketing campaigns, editorially curated topic hubs, and so on—Drupal's combination of templates and structured content can fall short.

Categories: FLOSS Project Planets
