Feeds

Django Weekly: Django Weekly 59: TDD, React, Admin Panel, Good Django Books and more

Planet Python - Mon, 2017-10-16 07:50
Worthy Read
GoCD - open source continuous delivery server
GoCD is a continuous delivery tool specializing in advanced workflow modeling and dependency management. It lets you track a change from commit to deploy at a glance, providing superior visibility into your workflow. It’s open source, free to use and download.
advert
Django: TDD and Unit Testing
How can you write tests to test your code if you don’t have any code in place?
TDD
Modern Django: Part 1: Setting up Django and React
This will be a multi-part tutorial series on how to create a "Modern" web application or SPA using Django and React.js.
reactjs
Scaling Django Admin Date Hierarchy - By Haki Benita
Curator's Note - Every article Haki Benita writes is a gem. We published a package called django-admin-lightweight-date-hierarchy which overrides the Django Admin date_hierarchy template tag and eliminates all database queries from it. For the implementation details and the shocking performance analysis, read on.
admin
HelloSign eSign API
Embed docs directly on your website with a few lines of code. Test the API for free.
advert
Run unlimited experiments on 1 single Digital Ocean droplet with Nginx, Gunicorn and Django
Run multiple instances of Django apps from the same server.
Digital Ocean
Django Test Driven Development with Pytest
We made the decision that our own website and all upcoming projects will be built using the Test Driven Development method. We need to be “world class” and follow the best practices used by professional devs in the wild.
django, TDD, PyTest
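As a taste of what this looks like in practice, here is a minimal pytest-django sketch; the Board model, app name, and fields are assumptions for illustration, not code from the article.

import pytest

from myapp.models import Board  # hypothetical app and model


@pytest.mark.django_db  # grants the test access to the database
def test_board_creation():
    board = Board.objects.create(name='Django', description='All about Django.')
    assert Board.objects.count() == 1
    assert board.name == 'Django'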
Nested Relationships in Serializers for OneToOne fields in Django Rest Framework
Django Rest Framework (DRF) is one of the most effectively written frameworks around Django and helps build REST APIs for an application backend. I am using it in one of my personal projects and stumbled upon the challenge of ‘serializing a model which is referencing another model via OneToOne field’.
DRF
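The usual pattern for this challenge, sketched here with hypothetical Profile/User models rather than the author's actual code, is to declare a nested serializer on the related field:

from django.contrib.auth.models import User
from rest_framework import serializers

from myapp.models import Profile  # hypothetical: user = models.OneToOneField(User)


class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', 'username', 'email')


class ProfileSerializer(serializers.ModelSerializer):
    # Nest the full related object instead of just its primary key.
    user = UserSerializer(read_only=True)

    class Meta:
        model = Profile
        fields = ('id', 'user', 'bio')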
What are good django books?
Reddit question.
books
Toptal
We help companies like Airbnb, Pfizer, and Artsy find great developers. Let us find your next great hire. Get started today.
advert
Understanding Routers in Django-Rest-Framework
Advantages of using ViewSets and Routers over traditional views.
DRF
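The gist of the pattern, as a hedged sketch (the Post model and serializer names are made up): a ViewSet bundles the list/detail/create/update logic, and a Router generates the URL patterns for it.

from rest_framework import routers, viewsets

from myapp.models import Post                  # hypothetical model
from myapp.serializers import PostSerializer   # hypothetical serializer


class PostViewSet(viewsets.ModelViewSet):
    queryset = Post.objects.all()
    serializer_class = PostSerializer


router = routers.DefaultRouter()
router.register(r'posts', PostViewSet)

urlpatterns = router.urls  # list, detail, create, update and delete routes, generated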
Django REST framework 3.7 released
The 3.7 release focuses on improvements to schema generation and the interactive API documentation.
DRF
A Complete Beginner's Guide to Django - Unified page
Single page / index of the entire tutorial.
core-django
Implementing faceted search with Django and PostgreSQL
I’ve added a faceted search engine to this blog, powered by PostgreSQL. It supports regular text search (proper search, not just SQL "like" queries), filter by tag, filter by date, filter by content type (entries vs blogmarks vs quotations) and any combination of the above.
postgres
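The "proper search" part builds on django.contrib.postgres. A minimal sketch of the idea, with a hypothetical Entry model (the field names are assumptions):

from django.contrib.postgres.search import SearchVector

from myapp.models import Entry  # hypothetical model with title/body fields

# Full-text search across two fields, with stemming available,
# rather than a plain SQL LIKE match.
results = Entry.objects.annotate(
    search=SearchVector('title', 'body'),
).filter(search='django')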
Counting calls to Django QuerySet methods
orm
An Ajax UX Pattern for Creating, Updating, and Ordering Items
ajax, UI
Django 1.11: Signals
Django provides what we call a Signal Dispatcher, which allows decoupled applications within your project to get notified when an action occurs elsewhere. Curator's note - I dislike signals. :-p To each their own.
signals
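For readers new to the dispatcher, this is the shape of it (an illustrative receiver, not code from the article):

from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver


@receiver(post_save, sender=User)
def notify_user_saved(sender, instance, created, **kwargs):
    # Runs after every User save, no matter which app triggered it.
    if created:
        print('New user created: %s' % instance.username)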
Why your Django models are fat?
models
Bootstrap v4 // Intro & Django Integration
bootstrap
How to understand Django models the simple way
Tutorial on Django models.
django
How to Consolidate Multiple Django Projects – Easy as Python
django
My essential django package list
In this article I’d like to present a list of Django packages (add-ons) that I use in most of my projects. I have been using Django for more than 5 years as my day-to-day tool to develop applications for the public sector organization I work for. So please keep in mind that these packages are targeted at an “enterprise” audience, and some things that target public (open access) web apps may be missing.
packages
Disabling Error Emails in Django
One of Django's nice "batteries included" features is the ability to send emails when an error is encountered. This is a great feature for small sites where minor problems would otherwise go unnoticed. Once your site starts getting lots of traffic, however, the feature turns into a liability. An error might fire off thousands of emails in rapid succession. Not only will this put extra load on your web servers, but you could also take down (or get banned from) your email server in the process.
email package
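The article presumably has its own recipe; two common ways to switch the emails off (stated here as assumptions, not the author's approach) are sketched below.

# settings.py -- two hedged options for silencing error emails.

# Option 1: an empty ADMINS list leaves AdminEmailHandler with no recipients.
ADMINS = []

# Option 2: stop 'django.request' errors from propagating up to the
# mail_admins handler configured on the parent 'django' logger.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        'django.request': {
            'handlers': [],
            'propagate': False,
        },
    },
}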
Custom Django User Model Video Tutorial
videos
JSON web token based authentication in Django – Jyoti Gautam – Medium
A brief description of how JWT authentication is implemented in Django.
JWT
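The core of any such implementation is issuing and verifying tokens. A minimal sketch with the PyJWT library (the secret, claims, and lifetime are arbitrary illustrative choices):

import datetime

import jwt  # PyJWT

SECRET = 'keep-me-out-of-version-control'


def issue_token(user_id):
    payload = {
        'user_id': user_id,
        'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm='HS256')


def verify_token(token):
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    return jwt.decode(token, SECRET, algorithms=['HS256'])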

Jobs
(Senior) Python/Django Developer (32-40 hours), at HFMtalentindex
Amsterdam, Netherlands
As a Full Stack Developer you will strengthen our development team with your knowledge and experience. In the role of (Senior) Python/Django Developer you will initially build a new version of our online assessment platform.


Projects
CSRF-tutorial - 17 Stars, 2 Forks
Uses Django to introduce CSRF, cookies, and sessions.
django-safe-filefield - 0 Stars, 0 Forks
Secure file field which allows you to restrict uploaded file extensions.
Categories: FLOSS Project Planets

FSF Events: Richard Stallman - "El software libre en la ética y en la práctica" (Gómez Palacio, Mexico)

GNU Planet! - Mon, 2017-10-16 07:05
Richard Stallman will speak about the goals and philosophy of the Free Software movement, and the status and history of the GNU operating system, which together with the Linux kernel is currently used by tens of millions of people worldwide.

This talk by Richard Stallman will be part of the XXIX Congreso Internacional de Ingeniería, Ciencias y Arquitectura (2017-10-23–26). It will be nontechnical and open to the public, and everyone is invited to attend.

Location: Av. Hidalgo, 1280 Felipe Ángeles, 35000 Gómez Palacio, Durango, México

Please fill out this form so that we can contact you about future events in the Durango region.

Categories: FLOSS Project Planets

Interview with Cillian Clifford

Planet KDE - Mon, 2017-10-16 05:54
Could you tell us something about yourself?

Hi everyone – my name is Cillian Clifford, I’m a 21 year old hobbyist artist and electronic musician, and an occasional animator, writer and game developer. I go by the username of Fatal-Exit online. I live in rural Ireland, a strange place for someone so interested in technology. My interests range from creative projects to tech related fields like engineering, robotics and science. Outside of things like these I enjoy gaming from time to time.

Do you paint professionally, as a hobby artist, or both?

Definitely as a hobby. I consider digital painting to be one of my weakest areas of art skills, so I spend a lot of time trying to improve it. Other areas of digital art I’m interested in include CAD, 3d modeling, digital sculpting, vector animation, and pixel art.

What genre(s) do you work in?

It varies! Hugely, in fact. Over the past two years on my current DeviantArt account I’ve uploaded game fan-art paintings, original fantasy and Sci-Fi pieces, landscapes, pixel art, and renders of 3d pieces. I also occasionally paint textures and UV maps for 3d artwork. Outside of still art, I also animate in vector and pixel art styles. I also occasionally make not-great indie games, but as you might guess, most never get finished.

Whose work inspires you most — who are your role models as an artist?

A wide range of artists, often not particular people but more their combined efforts on projects. I will say that David Revoy and GDQuest in the Krita community are a big inspiration. Youtube artists such as Sycra, Jazza and Borodante are another few I can think of. Lots of my favorite art of all time has come from large game companies such as Blizzard and Hi-Rez Studios. Also game related, the recent rise of more retro and pixel based graphics in indie games is a huge interest of mine, and games like Terraria, Stardew Valley and Hyper-Light Drifter have an art style that truly inspires me.

How and when did you get to try digital painting for the first time?

My first time doing some sort of “digital painting” was when I was about 16-17. I did the graphics design work for a board game a team of us were working on for a school enterprise project, using the free graphics software Paint.net and a mouse. It took ages. However the project ended up taking off and we ended up in the final stage of the competition. After that was over (we didn’t win) I decided digital art might be something to seriously invest in and bought a graphics tablet. For a couple of years I made unimaginably terrible art and in 2015 I decided to shut down my DeviantArt account and start fresh on a new account, with my new style. This was about when I found Krita, I believe.

What makes you choose digital over traditional painting?

A few things: Firstly, I could never paint in a traditional sense, I was absolutely terrible. At school I was considered a C grade artist, and that was even when working on pen and ink drawings, a style I used to be good at but have since abandoned. I never learned to paint traditionally.

Secondly, I can do it anywhere. In my bedroom with a Ugee graphics monitor and my workstation desktop, or lots of other places if I take my aging laptop and Huion graphics tablet with me. Soon I’m looking to buy a mobile tablet similar to the Microsoft Surface Pro, that’ll let me paint absolutely anywhere.

Thirdly, the tech involved. So not only am I able to emulate any media that exists in traditional art with various software, I can also work on art styles that aren’t even possible with traditional. As well as this, functions like undo, zooming in and out of the canvas, layers and blending modes, gradients and bucket fill, the list goes on and on.

I can happily say I never want to “go back” to traditional painting even though I was never any good at it in the first place.

How did you find out about Krita?

That’s a hard question. I’m not absolutely sure, but I’ve an idea that it might have been through David Revoy’s work on the Blender Foundation movies, and Pepper and Carrot. I was looking for a cheap or free piece of software because I didn’t want to use cracked Photoshop/Painter, and I’d already used GIMP and Paint.net, and neither were good for the art I was looking to create. I tried MyPaint but it never worked properly with my tablet. I did buy ArtRage at some point but I wasn’t happy with the tools in that. It came down to probably a choice of Krita or Clip Studio Paint. Krita had the price tag of free so it was the first one I tried. And I stuck with it.

What was your first impression?

Wow.

At least I think it was. When I first tried it everything just seemed to work straight off. It seemed simple enough for me to use efficiently. And the brush engine was simply amazing. I don’t know if there’s any other program with brushes that easy to customize to a huge extent but still so simple to set up. I first tried it in version 2.something so it was before animation was added.

What do you love about Krita?

Mostly, the fact that it works for most things you can throw at it. I’ve made game assets, textures, paintings, drawings, pixel art, a couple of test animations with the animation function, pretty much everything. I feel like it’s the Blender of 2d, the free tool that does pretty much everything, maybe not the 100% best at it, but certainly the most economical option.

The brush engine, like I said before, is one of its best assets; it has one of the most useful color pickers I’ve used; it includes, for free, the feature set of the paid Lazy Nezumi plugin for Photoshop; and the interface can be there when you need it but vanish at the press of a button. Just loads of good things.

The variety of brush packs made by the community is also a great asset. I own GDQuest’s premium bundle and also use Deevad’s pack on a regular basis. I love to then tweak those brushes to suit my needs.

What do you think needs improvement in Krita? Is there anything that really annoys you?

The main current annoyance with Krita is the text tool. I just hate it. It’s the one thing that makes me want to have access to Photoshop. And I know it’s supposedly one of the things being focused on in future updates, so hopefully they don’t take too long to happen.

Another problem I had with Krita happened last year. It’s been fixed since, but it’s certainly nothing I’d like to see happen again with V4 (which I worry is a possibility). Basically, what happened was that when the Krita 3 update came out, it broke support for my Ugee graphics monitor. Completely broke it. I had to either stick with the old version, Krita 2.9, or, when I wanted to use tools from V3, uninstall my screen tablet drivers, install drivers for my tiny old Intuos Small tablet and use that. Luckily, later on (about 6-8 months down the line), an update for my tablet drivers fixed all problems, and it just worked with my screen tablet from then on.

What sets Krita apart from the other tools that you use?

Ease of use, the brush engine, the speed that it works at (even with 4k documents on my Pentium-powered laptop), the way it currently works well on all my hardware, the price tag (FREE!), the community, and some great providers of custom brushes (GDQuest’s and David Revoy’s in particular). Even though I’ve since stopped using Krita for pixel art and moved to Aseprite (only because its pixel animation tools are more sophisticated for making game assets), I believe it’s the most suitable program I have access to for digital painting, comic art, and traditional 2d animation.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

This is a hard question because I feel I am a terrible critic. If I had to choose it’d probably be Sailing to the Edge of the World II – from my Sailing to the Edge of the world painting series I made for a good colleague of mine. I also included the latest painting in that series, though I believe the second one was the best. Even though it’s been maybe 8 months since I made that painting it’s still one of my best.

What techniques and brushes did you use in it?

If I remember correctly I used mostly David Revoy’s brush-pack. The painterly brushes were used along with the pen and ink brushes and some of the airbrushes. To be honest it’s been so long since I made it I’m not 100% sure. I may have also used some of the default brushes such as the basic round and soft round.

Where can people see more of your work?

My DeviantArt (where I post the majority of my art):
https://fatal-exit.deviantart.com/
My twitter (where I post some of my art):
https://twitter.com/FatalExit
And my newest place: Tumblr (Not much here at all):
https://www.tumblr.com/blog/fatalexit

Anything else you’d like to share?

I’m working on resurrecting my Youtube channel at:
https://www.youtube.com/channel/UCdnUE1bkY2suvUvZhIj9rIQ

As of the time of writing this it’s mostly just home to my music. However I’m looking to expand it into art, animation and game development, with tutorials and process videos. I’m certainly hoping to post some Krita reviews, tutorials and videos on how it can be used in a game development pipeline over the coming months, as well as videos of other software such as Blender, Aseprite, 3d Coat, Moho, Construct 3, Gamemaker Studio 2, Unreal Engine 4, Sunvox, FL Studio Mobile and others.

Categories: FLOSS Project Planets

Matt Glaman: Why and How for SSLs and your website

Planet Drupal - Mon, 2017-10-16 05:00
Secure sites. HTTPS and SSL. A topic more and more site owners and maintainers are having to work with. For some, this is a great thing; for others it is either nerve-wracking or confusing. Luckily, for us all, getting an SSL certificate and implementing full site HTTPS is becoming easier.
Categories: FLOSS Project Planets

Iain R. Learmonth: No more no surprises

Planet Debian - Mon, 2017-10-16 04:00

Debian has generally always had, as a rule, “sane defaults” and “no surprises”. This was completely shattered for me when Vim decided to hijack the mouse from my terminal and break all copy/paste functionality. This has occurred since the release of Debian 9.

I expect for my terminal to behave consistently, and this is broken every time I log in to a Debian 9 system where I have not configured Vim to disable this functionality. I also see I’m not alone in this frustration.

To fix this, in your .vimrc:

if !has("gui_running")
  set mouse=
endif

(This will check to see if you’re using GVim or similar, where it would be reasonable to expect the mouse to work.)

This is perhaps not aggressive enough though. I never want to have console applications trying to use the mouse. I’ve configured rxvt to do things like open URLs in Firefox, etc. that I always want to work, and I always want my local clipboard to be used so I can copy/paste between remote machines.

I’ve found a small patch that would appear to disable mouse reporting for rxvt, but unfortunately I cannot do this through an Xresources option. If someone is looking for something to do for Hacktoberfest, I’d love to see this be an option for rxvt without re-compiling:

diff --git a/src/rxvt.h b/src/rxvt.h
index 5c7cf66..2751ba3 100644
--- a/src/rxvt.h
+++ b/src/rxvt.h
@@ -646,7 +646,7 @@ enum {
 #define PrivMode_ExtMouseRight (1UL<<24) // xterm pseudo-utf-8, but works in non-utf-8-locales
 #define PrivMode_BlinkingCursor (1UL<<25)

-#define PrivMode_mouse_report (PrivMode_MouseX10|PrivMode_MouseX11|PrivMode_MouseBtnEvent|PrivMode_MouseAnyEvent)
+#define PrivMode_mouse_report 0 /* (PrivMode_MouseX10|PrivMode_MouseX11|PrivMode_MouseBtnEvent|PrivMode_MouseAnyEvent) */

 #ifdef ALLOW_132_MODE
 # define PrivMode_Default (PrivMode_Autowrap|PrivMode_ShiftKeys|PrivMode_VisibleCursor|PrivMode_132OK)
Categories: FLOSS Project Planets

PreviousNext: Update to Drupal core 8.4, a step by step guide

Planet Drupal - Mon, 2017-10-16 02:24

Drupal 8.4 is stable! With 8.3 coming to end of life, it's important to update your projects to the latest and greatest. This blog will guide you through upgrading from Drupal core 8.3 to 8.4 while avoiding those nasty and confusing composer dependency errors.

by Adam Bramley / 16 October 2017

The main issue with the upgrade to Drupal core 8.4 is dependency conflicts between Drush and Drupal core. Both Drush 8.1.x and Drupal 8.3 use the 2.x version of the Symfony libraries, while Drupal 8.4 has been updated to use Symfony 3.x. This means that when using composer to update Drupal core alone, composer will complain about conflicts in dependencies, since Drush depends on Symfony 2.x.

Updating your libraries

Note: If you are using Drush 8.1.15 you will not have these issues as it is now compatible with both Symfony 2.x and 3.x

However, if you are using Drush < 8.1.15 (which a lot of people will be on), running the following command will give you a dependency conflict:

composer update drupal/core --with-dependencies

Resulting in an error message, followed by a composer trace:

Your requirements could not be resolved to an installable set of packages.

The best way to fix this is to update both Drupal core and Drush at the same time. Drush 8.x is not compatible with Drupal 8.4 so you will need to update to Drush 9.x.

composer update drupal/core drush/drush --with-dependencies
composer require "drush/drush:~9.0"

Some people have reported success with simply running a require on both updated versions of Drupal and Drush at the same time, but this did not work for me:

composer require "drupal/core:~8.4" "drush/drush:~9.0"

What next?

Great, you're on the latest versions of both core and drush, but what's next? Well, that depends on a lot of things like what contributed and custom modules your project is running, how you're deploying your site, and what automated tests you are running. As I can't possibly cover all bases, I'll go through the main issues we encountered.

First things first, you'll need to get your site's database and configuration updated. I highly recommend running your database update hooks and exporting your site's configuration before proceeding any further.

Next, you'll want to ensure that all of your deployment tools are still working. Here at PreviousNext our CI/CD tools call Make commands which are essentially just wrappers around one or more Drush commands.

For the most part, the core Drush commands (that is, the commands that ship with drush) continued working as expected, with a couple of small caveats:

1. You can no longer pipe a SQL dump into the drush sql-cli (sqlc) command.

Previously, we had:
drush sqlc < /path/to/db.sql
Now we have:
`eval drush sql-connect` < /path/to/db.sql

Note: As of Drush 9.0-beta7 this has now been fixed, meaning the old version will work again!

2. The drush --root option no longer works with relative paths

Previously, our make commands all ran Drush with the --root (or -r) option relative to the repository root:
./bin/drush -r ./app some-command
Now it must be an absolute path, or Drush will complain about not being able to find the Drupal settings:
./bin/drush -r /path/to/app some-command

3. Custom Drush commands

For custom Drush commands, you will need to port them to use the new object oriented style approach and put the command into a dedicated module. Since version 9.0-beta5, Drush has dropped support for the old drush.inc style approach that could be used to add commands to a site without adding a new module.

For an example on this, take a look at our drush_cmi_tools library which provides some great extensions for importing and exporting config. This PR shows how we ported these commands to the new Drush 9 format.

For more information on porting commands to Drush 9, check out Moshe Weitzman's blog on it.

Other gotchas

Following the Drush upgrades, your project will need various other updates based on the modules and libraries it uses. I'll detail some issues I faced when updating the Transport for NSW site below.

1. Stale bundles in the bundle field map key value collection

Added as part of this issue, Views now throws warnings similar to "A non-existent config entity name returned by FieldStorageConfigInterface::getBundles(): field name: field_dates, bundle: page" for fields that are in the entity bundle field map but no longer exist on the site. We had a handful of these fields, which threw warnings on every cache clear. To fix this, simply add an update hook which clears these stale fields out of the entity.definitions.bundle_field_map keyvalue collection:

/**
 * Fix entity.definitions.bundle_field_map key store with old bundles.
 */
function my_module_update_8001() {
  /** @var \Drupal\Core\KeyValueStore\KeyValueFactoryInterface $key_value_factory */
  $key_value_factory = \Drupal::service('keyvalue');
  $field_map_kv_store = $key_value_factory->get('entity.definitions.bundle_field_map');
  $node_map = $field_map_kv_store->get('node');
  // Remove the field_dates field from the bundle field map for the page bundle.
  unset($node_map['field_dates']['bundles']['page']);
  $field_map_kv_store->set('node', $node_map);
}

2. Custom entities with external URI relationships throw fatal errors when deleted while menu_link_content is installed

The menu_link_content module now has an entity_predelete hook that looks through an entity's URI relationships, tries to find any menu links that point to that specific route, and deletes them if found. When the URI is external, an error is thrown when it tries to get the route name: "External URLs do not have an internal route name." See this issue for more information.

3. Tests that submit a modal dialog window will need to be altered

This is a very edge case issue, but will hopefully help someone! In older versions of jQuery UI, the buttons that were added to the bottom of the modal form for submission had an inner span tag which could be clicked as part of a test. For example, in Linkit's LinkitDialogTest. This span no longer exists, and attempting to "click" any other part of that button in a similar way will throw an error in PhantomJS. To get around that simply change your test to do something similar to the following:

$this->click('.ui-dialog button:contains("Save")');

Kudos to jhedstrom for finding this one. See this issue for more information.

Conclusion

Personally, I found the upgrade to be quite tedious for a minor version upgrade. Thankfully, our project has a large suite of functional/end-to-end tests which really helped tease out the issues and gave us greater confidence that the site was still functioning well post-upgrade. Let me know in the comments what issues you're facing!

Finally, take a look at Lee's blog on some of the major changes in 8.4 for some more insight into what you might need to fix.

Tagged Composer, Drupal 8, drush

Posted by Adam Bramley
Senior Drupal Developer

Dated 16 October 2017

Categories: FLOSS Project Planets

Love Huria: Time to level up Code Reviews

Planet Drupal - Mon, 2017-10-16 02:00

Being part of a code review process is very important for us, and trust me, we take it very seriously. It is required not just for the team but for individual learning as well.

Code reviews are crucial for knowledge transfer, for avoiding small/common mistakes, and of course for maintaining best practices throughout the dev team. So let’s take my team for example: we have around 11 developers on the team, all producing code which needs to be reviewed. So basically, yeah, that’s a whole lot of code!

Why It’s Important?

Pushing code to production is...

Categories: FLOSS Project Planets

Russ Allbery: Free software log (September 2017)

Planet Debian - Mon, 2017-10-16 00:47

I said that I was going to start writing these regularly, so I'm going to stick to it, even when the results are rather underwhelming. One of the goals is to make the time for more free software work, and I do better at doing things that I record.

The only piece of free software work for September was that I made rra-c-util compile cleanly with the Clang static analyzer. This was fairly tedious work that mostly involved unconfusing the compiler or converting (semi-intentional) crashes into explicit asserts, but it unblocks using the Clang static analyzer as part of the automated test suite of my other projects that are downstream of rra-c-util.

One of the semantic changes I made was that the vector utilities in rra-c-util (which maintain a resizable array of strings) now always allocate room for at least one string pointer. This wastes a small amount of memory for empty vectors that are never used, but ensures that the strings struct member is always valid. This isn't, strictly speaking, a correctness fix, since all the checks were correct, but after some thought, I decided that humans might have the same problem that the static analyzer had. It's a lot easier to reason about a field that's never NULL. Similarly, the replacement function for a missing reallocarray now does an allocation of size 1 if given a size of 0, just to avoid edge case behavior. (I'm sure the behavior of a realloc with size 0 is defined somewhere in the C standard, but if I have to look it up, I'd rather not make a human reason about it.)

I started on, but didn't finish, making rra-c-util compile without Clang warnings (at least for a chosen set of warnings). By far the hardest problem here are the Clang warnings for comparisons between unsigned and signed integers. In theory, I like this warning, since it's the cause of a lot of very obscure bugs. In practice, gah does C ever do this all over the place, and it's incredibly painful to avoid. (One of the biggest offenders is write, which returns a ssize_t that you almost always want to compare against a size_t.) I did a bunch of mechanical work, but I now have a lot of bits of code like:

if (status < 0)
    return;
written = (size_t) status;
if (written < avail)
    buffer->left += written;

which is ugly and unsatisfying. And I also have a ton of casts, such as with:

buffer_resize(buffer, (size_t) st.st_size + used);

since st.st_size is an off_t, which may be signed. This is all deeply unsatisfying and ugly, and I think it makes the code moderately harder to read, but I do think the warning will potentially catch bugs and even security issues.

I'm still torn. Maybe I can find some nice macros or programming styles to avoid the worst of this problem. It definitely requires more thought, rather than just committing this huge mechanical change with lots of ugly code.

Mostly, this kind of nonsense makes me want to stop working on C code and go finish learning Rust....

Anyway, apart from work, the biggest thing I managed to do last month that was vaguely related to free software was upgrading my personal servers to stretch (finally). That mostly went okay; only a few things made it unnecessarily exciting.

The first was that one of my systems had a very tiny / partition that was too small to hold the downloaded debs for the upgrade, so I had to resize it (VM disk, partition, and file system), and that was a bit exciting because it has an old-style DOS partition table that isn't aligned (hmmm, which is probably why disk I/O is so slow on those VMs), so I had to use the obsolete fdisk -c=dos mode because I wasn't up for replacing the partition right then.

The second was that my first try at an upgrade died with a segfault during the libc6 postinst and then every executable segfaulted. A mild panic and a rescue disk later (and thirty minutes and a lot of swearing), I tracked the problem down to libc6-xen. Nothing in the dependency structure between jessie and stretch forces libc6-xen to be upgraded in lockstep or removed, but it's earlier in the search path. So ld.so gets upgraded, and then finds the old libc6 from the libc6-xen package, and the mismatch causes immediate segfaults. A chroot dpkg --purge from the rescue disk solved the problem as soon as I knew what was going on, but that was a stressful half-hour.

The third problem was something I should have known was going to be an issue: an old Perl program that does some internal stuff for one of the services I ran had a defined @array test that has been warning for eons and that I never fixed. That became a full syntax error with the most recent Perl, and then I fixed it incorrectly the first time and had a bunch of trouble tracking down what I'd broken. All sorted out now, and everything is happily running stretch. (ejabberd, which other folks had mentioned was a problem, went completely smoothly, although I suspect I now have too many of the plugin packages installed and should do a purging.)

Categories: FLOSS Project Planets

Lullabot: Behind the Screens with Chris Teitzel

Planet Drupal - Mon, 2017-10-16 00:00
Chris Teitzel of Cellar Door Media gives us a preview of Security Saturday at BadCamp 2017 and provides some great tips for securing your website. He tells us why we should always say yes to the community; you never know where it's going to lead. Chris also shares some amazing stories about bringing a Drupal-based communications tool developed from the DrupalCon Denver Tropo Hackathon, to Haiti in 2012 to help with relief efforts after their devastating 2010 earthquake.
Categories: FLOSS Project Planets

Bay Area Drupal Camp: BADCamp 2017 starts this Wednesday

Planet Drupal - Sun, 2017-10-15 23:00

BADCamp kicks off this Wednesday! We are looking forward to seeing you and are excited to share some logistical details and tips for making the most of your time at BADCamp.

Where do I register and pick up my badge?

Central BADCamp registration opens at 8:15 am each morning. It’s located in the Martin Luther King (MLK) Student Union, on the 3rd Floor in the Kerr Lobby.

Map to Martin Luther King Student Union

2495 Bancroft Way, at Telegraph Avenue

University of California

Berkeley CA 94720


If you are attending a summit at the Marsh Art Center, badges will be available for pick up when you arrive.

Map to Marsh Art Center

2120 Allston Way

Berkeley, CA 94704


Be sure to come back to BADCamp Expo Hall at MLK Pauley West during breaks. We’ll have coffee, pinball, 15-min relaxation massages and a chance to thank our generous sponsors ... many are hiring!


Here is an overview of what is happening at each venue.


Where is everything? Where do I go?
  • Take a look at our Event Timeline to find out what is happening when.

  • Check out the Venues to see what is happening where.

  • Be sure to log in and make your session schedule in advance and then follow along on your mobile device.


What’s the 411 on food and beverage?

As always, BADCamp will provide an endless supply of coffee, tea, and water.


Wednesday & Thursday

  • All Training & Summits will have light snacks in the morning.

  • For lunch, head outside to discover some of Berkeley’s best food!

  • Stop by the Sponsor Expo on Thursday for specialty coffees.


Friday & Saturday

  • The Sponsor Expo will have a waffle bar and specialty coffees.

  • Lunch is sponsored by Acquia on both Friday & Saturday.


Parking

Parking at Berkeley can be extremely challenging. Consider taking public transportation whenever possible.  


Anything else to know?
  • Wear good shoes! You will do a lot of walking.

  • Bring layers, or donate at the $100 level and get not only an awesome 2017 t-shirt but also a solar charger and a cozy BADCamp hoodie!

  • The Fires. We are keeping an eye on things and will provide updates if the air quality or anything else impacts the event. Stay in touch with BADCamp on Twitter.

  • The BADCamp Contribution Lounge is open 24 hours, beginning at 9 am on Wednesday and going until 10 pm on Saturday. We welcome and encourage you to participate!


Sponsors

Our sponsors make the magic of BADCamp possible! Stop by to thank them at the event. As an added bonus, many of them are hiring! We’re also sending an extra big virtual hug to Platform.sh, Pantheon & Acquia for sponsoring at the Core level and helping to keep BADCamp AWESOME!

Drupal Planet
Categories: FLOSS Project Planets

Norbert Preining: Fixing vim in Debian

Planet Debian - Sun, 2017-10-15 21:18

I had been wondering for quite some time why vim on my server behaves so stupidly with respect to the mouse: jumping around, and copy and paste not being possible the usual way. All this despite having

set mouse=

in my /etc/vim/vimrc.local. Finally I found out why, thanks to bug #864074, and fixed it.

The whole mess comes from the fact that, when there is no ~/.vimrc, vim loads defaults.vim after vimrc.local, thus overwriting several settings put in there.

There is a comment (which I didn’t see, though) in /etc/vim/vimrc explaining this:

" Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc. " This happens after /etc/vim/vimrc(.local) are loaded, so it will override " any settings in these files. " If you don't want that to happen, uncomment the below line to prevent " defaults.vim from being loaded. " let g:skip_defaults_vim = 1

I agree that this is a good way to setup vim on a normal installation of Vim, but the Debian package could do better. The problem is laid out clearly in the bug report: If there is no ~/.vimrc, settings in /etc/vim/vimrc.local are overwritten.

This is as counterintuitive as it can be in Debian – and I don’t know any other package that does it in a similar way.

Since the settings in defaults.vim are quite reasonable, I want to have them, and only fix the few items I disagree with, like the mouse. In the end, what I did is the following in my /etc/vim/vimrc.local:

if filereadable("/usr/share/vim/vim80/defaults.vim")
  source /usr/share/vim/vim80/defaults.vim
endif
" now set the flag so that the defaults file is not reloaded afterwards!
let g:skip_defaults_vim = 1
" turn off the mouse
set mouse=
" other override settings go here

There is probably a better way to get a generic load statement that does not depend on the Vim version, but for now I am fine with that.

Categories: FLOSS Project Planets

Matthew Rocklin: Streaming Dataframes

Planet Python - Sun, 2017-10-15 20:00

This work is supported by Anaconda Inc and the Data Driven Discovery Initiative from the Moore Foundation

This post is about experimental software. This is not ready for public use. All code examples and API in this post are subject to change without warning.

Summary

This post describes a prototype project to handle continuous data sources of tabular data using Pandas and Streamz.

Introduction

Some data never stops. It arrives continuously in a constant, never-ending stream. This happens in financial time series, web server logs, scientific instruments, IoT telemetry, and more. Algorithms to handle this data are slightly different from what you find in libraries like NumPy and Pandas, which assume that they know all of the data up-front. It’s still possible to use NumPy and Pandas, but you need to combine them with some cleverness and keep enough intermediate data around to compute marginal updates when new data comes in.

Example: Streaming Mean

For example, imagine that we have a continuous stream of CSV files arriving and we want to print out the mean of our data over time. Whenever a new CSV file arrives we need to recompute the mean of the entire dataset. If we’re clever we keep around enough state so that we can compute this mean without looking back over the rest of our historical data. We can accomplish this by keeping running totals and running counts as follows:

total = 0
count = 0

for filename in filenames:  # filenames is an infinite iterator
    df = pd.read_csv(filename)
    total = total + df.sum()
    count = count + df.count()
    mean = total / count
    print(mean)

Now as we add new files to our filenames iterator our code prints out new means that are updated over time. We don’t have a single mean result, we have a continuous stream of mean results that are each valid for the data up to that point. Our output data is an infinite stream, just like our input data.

When our computations are linear and straightforward like this a for loop suffices. However when our computations have several streams branching out or converging, possibly with rate limiting or buffering between them, this for-loop approach can grow complex and difficult to manage.

Streamz

A few months ago I pushed a small library called streamz, which handled control flow for pipelines, including linear map operations, operations that accumulated state, branching, joining, as well as back pressure, flow control, feedback, and so on. Streamz was designed to handle all of the movement of data and signaling of computation at the right time. This library was quietly used by a couple of groups and now feels fairly clean and useful.

Streamz was designed to handle the control flow of such a system, but did nothing to help you with streaming algorithms. Over the past week I’ve been building a dataframe module on top of streamz to help with common streaming tabular data situations. This module uses Pandas and implements a subset of the Pandas API, so hopefully it will be easy to use for programmers with existing Python knowledge.

Example: Streaming Mean

Our example above could be written as follows with streamz

source = Stream.filenames('path/to/dir/*.csv')  # stream of filenames
sdf = (source.map(pd.read_csv)                  # stream of Pandas dataframes
             .to_dataframe(example=...))        # logical streaming dataframe
sdf.mean().stream.sink(print)                   # printed stream of mean values

This example is no more clear than the for-loop version. On its own this is probably a worse solution than what we had before, just because it involves new technology. However it starts to become useful in two situations:

  1. You want to do more complex streaming algorithms

    sdf = sdf[sdf.name == 'Alice']
    sdf.x.groupby(sdf.y).mean().sink(print)
    # or
    sdf.x.rolling('300ms').mean()

    It would require more cleverness to build these algorithms with a for loop as above.

  2. You want to do multiple operations, deal with flow control, etc..

    sdf.mean().sink(print)
    sdf.x.sum().rate_limit(0.500).sink(write_to_database)
    ...

    Consistently branching off computations, routing data correctly, and handling time can all be challenging to accomplish consistently.

Jupyter Integration and Streaming Outputs

During development we’ve found it very useful to have live updating outputs in Jupyter.

Usually when we evaluate code in Jupyter we have static inputs and static outputs:

However now both our inputs and our outputs are live:

We accomplish this using a combination of ipywidgets and Bokeh plots, both of which provide nice hooks to change previous Jupyter outputs and work well with the Tornado IOLoop (streamz, Bokeh, Jupyter, and Dask all use Tornado for concurrency). We’re able to build nicely responsive feedback whenever things change.

In the following example we build our CSV to dataframe pipeline that updates whenever new files appear in a directory. Whenever we drag files to the data directory we see that all of our outputs update.

What is supported?

This project is very young and could use some help. There are plenty of holes in the API. That being said, the following works well:

Elementwise operations:

sdf['z'] = sdf.x + sdf.y
sdf = sdf[sdf.z > 2]

Simple reductions:

sdf.sum()
sdf.x.mean()

Groupby reductions:

sdf.groupby(sdf.x).y.mean()

Rolling reductions by number of rows or time window

sdf.rolling(20).x.mean()
sdf.rolling('100ms').x.quantile(0.9)

Real time plotting with Bokeh (one of my favorite features)

sdf.plot()

What’s missing?
  1. Parallel computing: The core streamz library has an optional Dask backend for parallel computing. I haven’t yet made any attempt to attach this to the dataframe implementation.
  2. Data ingestion from common streaming sources like Kafka. We’re in the process now of building asynchronous-aware wrappers around Kafka Python client libraries, so this is likely to come soon.
  3. Out-of-order data access: soon after parallel data ingestion (like reading from multiple Kafka partitions at once) we’ll need to figure out how to handle out-of-order data access. This is doable, but will take some effort. This is where more mature libraries like Flink are quite strong.
  4. Performance: Some of the operations above (particularly rolling operations) do involve non-trivial copying, especially with larger windows. We’re relying heavily on the Pandas library which wasn’t designed with rapidly changing data in mind. Hopefully future iterations of Pandas (Arrow/libpandas/Pandas 2.0?) will make this more efficient.
  5. Filled out API: Many common operations (like variance) haven’t yet been implemented. Some of this is due to laziness and some is due to wanting to find the right algorithm.
  6. Robust plotting: Currently this works well for numeric data with a timeseries index but not so well for other data.

But most importantly, this needs to be used by people with real problems, to help us understand what here is valuable and what is unpleasant.

Help would be welcome with any of this.

You can install this from github

pip install git+https://github.com/mrocklin/streamz.git

Documentation and code are here:

Current work

Current and upcoming work is focused on data ingestion from Kafka and parallelizing with Dask.

Categories: FLOSS Project Planets

Simple is Better Than Complex: A Complete Beginner's Guide to Django - Part 7

Planet Python - Sun, 2017-10-15 20:00
Introduction

Welcome to the last part of our tutorial series! In this tutorial, we are going to deploy our Django application to a production server. We are also going to configure an Email service and HTTPS certificates for our servers.

At first, I thought about giving an example using a Virtual Private Server (VPS), which is more generic, and then using a Platform as a Service such as Heroku. But it was too much detail, so I ended up creating this tutorial focused on VPSs.

Our project is live! If you want to check it online before you go through the text, this is the application we are going to deploy: www.djangoboards.com.

Version Control

Version control is an extremely important topic in software development, especially when working with teams and maintaining production code at the same time, with several features being developed in parallel. No matter if it’s a one-developer project or a multi-developer project, every project should use version control.

There are several options for version control systems out there. Perhaps because of the popularity of GitHub, Git became the de facto standard in version control. So if you are not familiar with version control, Git is a good place to start. There are many tutorials, courses, and resources in general, so it’s easy to find help.

GitHub and Code School have a great interactive tutorial about Git, which I used years ago when I started moving from SVN to Git. It’s a very good introduction.

This is such an important topic that I probably should have brought it up since the first tutorial. But the truth is I wanted the focus of this tutorial series to be on Django. If all this is new for you, don’t worry. It’s important to take one step at a time. Your first project won’t be perfect. It’s important to keep learning and evolving your skills slowly but with constancy.

A very good thing about Git is that it’s much more than just a version control system. There’s a rich ecosystem of tools and services built around it. Some good examples are continuous integration, deployment, code review, code quality, and project management.

Using Git to support the deployment process of Django projects works very well. It’s a convenient way to pull the latest version from the source code repository or to roll back to a specific version in case of a problem. There are many services that integrate with Git to automate test execution and deployment, for example.

If you don’t have Git installed on your local machine, grab the installer from https://git-scm.com/downloads.

Basic Setup

First thing, set your identity:

git config --global user.name "Vitor Freitas"
git config --global user.email vitor@simpleisbetterthancomplex.com

In the project root (the same directory as manage.py is), initialize a git repository:

git init
Initialized empty Git repository in /Users/vitorfs/Development/myproject/.git/

Check the status of the repository:

git status
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        accounts/
        boards/
        manage.py
        myproject/
        requirements.txt
        static/
        templates/

nothing added to commit but untracked files present (use "git add" to track)

Before we proceed in adding the source files, create a new file named .gitignore in the project root. This special file will help us keep the repository clean, without unnecessary files like cache files or logs for example.

You can grab a generic .gitignore file for Python projects from GitHub.

Make sure to rename it from Python.gitignore to just .gitignore (the dot is important!).

You can complement the .gitignore file by telling it to ignore SQLite database files, for example:

.gitignore

__pycache__/
*.py[cod]
.env
venv/

# SQLite database files
*.sqlite3

Now add the files to the repository:

git add .

Notice the dot here. The command above is telling Git to add all untracked files within the current directory.

Now make the first commit:

git commit -m "Initial commit"

Always write a comment telling what the commit is about, briefly describing what you have changed.

Remote Repository

Now let’s set up GitHub as a remote repository. First, create a free account on GitHub, then confirm your email address. After that, you will be able to create public repositories.

For now, just pick a name for the repository; don’t initialize it with a README, a .gitignore, or a license yet. Make sure you start the repository empty:

After you create the repository you should see something like this:

Now let’s configure it as our remote repository:

git remote add origin git@github.com:sibtc/django-boards.git

Now push the code to the remote server, that is, to the GitHub repository:

git push origin master
Counting objects: 84, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (81/81), done.
Writing objects: 100% (84/84), 319.70 KiB | 0 bytes/s, done.
Total 84 (delta 10), reused 0 (delta 0)
remote: Resolving deltas: 100% (10/10), done.
To git@github.com:sibtc/django-boards.git
 * [new branch]      master -> master

I created this repository just to demonstrate the process of creating a remote repository with an existing code base. The source code of the project is officially hosted in this repository: https://github.com/sibtc/django-beginners-guide.

Project Settings

No matter if the code is stored in a public or private remote repository, sensitive information should never be committed and pushed to the remote repository. That includes secret keys, passwords, API keys, etc.

At this point, we have to deal with two specific types of configuration in our settings.py module:

  • Sensitive information such as keys and passwords;
  • Configurations that are specific to a given environment.

Passwords and keys can be stored in environment variables or using local files (not committed to the remote repository):

# environment variables
import os
SECRET_KEY = os.environ['SECRET_KEY']

# or local files
with open('/etc/secret_key.txt') as f:
    SECRET_KEY = f.read().strip()

For that, there’s a great utility library called Python Decouple that I use in every single Django project I develop. It will search for a local file named .env to set the configuration variables and will fall back to the environment variables. It also provides an interface to define default values, transform the data into int, bool, and list when applicable.

It’s not mandatory, but I really find it a very useful tool. And it works like a charm with services like Heroku.

First, let’s install it:

pip install python-decouple

myproject/settings.py

from decouple import config

SECRET_KEY = config('SECRET_KEY')

Now we can place the sensitive information in a special file named .env (notice the dot in front) in the same directory where the manage.py file is:

myproject/
 |-- myproject/
 |    |-- accounts/
 |    |-- boards/
 |    |-- myproject/
 |    |-- static/
 |    |-- templates/
 |    |-- .env        <-- here!
 |    |-- .gitignore
 |    |-- db.sqlite3
 |    +-- manage.py
 +-- venv/

.env

SECRET_KEY=rqr_cjv4igscyu8&&(0ce(=sy=f2)p=f_wn&@0xsp7m$@!kp=d

The .env file is ignored in the .gitignore file, so every time we are going to deploy the application or run it on a different machine, we will have to create a .env file and add the necessary configuration.

Now let’s install another library to help us write the database connection string in a single line. This way it’s easier to write different database connection strings in different environments:

pip install dj-database-url

For now, these are all the configurations we need to decouple:

myproject/settings.py

from decouple import config, Csv
import dj_database_url

SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', cast=Csv())
DATABASES = {
    'default': dj_database_url.config(
        default=config('DATABASE_URL')
    )
}

Example of a .env file for our local machine:

SECRET_KEY=rqr_cjv4igscyu8&&(0ce(=sy=f2)p=f_wn&@0xsp7m$@!kp=d
DEBUG=True
ALLOWED_HOSTS=.localhost,127.0.0.1

Notice that in the DEBUG configuration we have a default, so in production we can ignore this configuration because it will be set to False automatically, as it is supposed to be.

Now the ALLOWED_HOSTS will be transformed into a list like ['.localhost', '127.0.0.1', ]. This is on our local machine; for production we will set it to something like ['.djangoboards.com', ] or whatever domain you have.

This particular configuration makes sure your application is only served to this domain.

Tracking Requirements

It’s a good practice to keep track of the project’s dependencies, so it is easier to install them on another machine.

We can check the currently installed Python libraries by running the command:

pip freeze
dj-database-url==0.4.2
Django==1.11.6
django-widget-tweaks==1.4.1
Markdown==2.6.9
python-decouple==3.1
pytz==2017.2

Create a file named requirements.txt in the project root, and add the dependencies there:

requirements.txt

dj-database-url==0.4.2
Django==1.11.6
django-widget-tweaks==1.4.1
Markdown==2.6.9
python-decouple==3.1

I kept pytz==2017.2 out because it is automatically installed by Django.

You can update your source code repository:

git add .
git commit -m "Add requirements.txt file"
git push origin master

Domain Name

If we are going to deploy a Django application properly, we will need a domain name. It’s important to have a domain name to serve the application, configure an email service, and configure an HTTPS certificate.

Lately, I’ve been using Namecheap a lot. You can get a .com domain for $8.88/year, or if you are just trying things out, you could register a .xyz domain for $0.99/year.

Anyway, you are free to use any registrar. To demonstrate the deployment process, I registered the www.DjangoBoards.com domain.

Deployment Strategy

Here is an overview of the deployment strategy we are going to use in this tutorial:

The cloud is our Virtual Private Server provided by Digital Ocean. You can sign up to Digital Ocean using my affiliate link to get a free $10 credit (only valid for new accounts).

Up front we will have NGINX, illustrated by the ogre. NGINX will receive all requests to the server. But it won’t try to do anything smart with the request data. All it is going to do is decide if the requested information is a static asset that it can serve by itself, or if it’s something more complicated. If so, it will pass the request to Gunicorn.

NGINX will also be configured with HTTPS certificates, meaning it will only accept requests via HTTPS. If the client tries to request via HTTP, NGINX will first redirect the user to HTTPS, and only then will it decide what to do with the request.

We are also going to install certbot to automatically renew the Let’s Encrypt certificates.

Gunicorn is an application server. Depending on the number of processors the server has, it can spawn multiple workers to process multiple requests in parallel. It manages the workload and executes the Python and Django code.
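Conveniently, Gunicorn configuration files are plain Python. A minimal sketch (the socket path is an assumption, and the worker formula is a widely used rule of thumb, not a value from this tutorial):

# gunicorn.conf.py (hypothetical example)
import multiprocessing

bind = 'unix:/tmp/gunicorn.sock'               # NGINX would proxy to this socket
workers = multiprocessing.cpu_count() * 2 + 1  # common rule of thumb for worker count

You would then start the application server with something like gunicorn -c gunicorn.conf.py myproject.wsgi.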

Django is the one doing the hard work. It may access the database (PostgreSQL) or the file system. But for the most part, the work is done inside the views, rendering templates, all those things that we’ve been coding for the past weeks. After Django processes the request, it returns a response to Gunicorn, which returns the result to NGINX, which will finally deliver the response to the client.

We are also going to install PostgreSQL, a production quality database system. Because of Django’s ORM system, it’s easy to switch databases.

The last step is to install Supervisor. It’s a process control system and it will keep an eye on Gunicorn and Django to make sure everything runs smoothly. If the server restarts, or if Gunicorn crashes, it will automatically restart it.

Deploying to a VPS (Digital Ocean)

You may use any other VPS (Virtual Private Server) you like. The configuration should be very similar, after all, we are going to use Ubuntu 16.04 as our server.

First, let’s create a new server (on Digital Ocean they call it “Droplet”). Select Ubuntu 16.04:

Pick the size. The smallest droplet is enough:

Then choose a hostname for your droplet (in my case “django-boards”):

If you have an SSH key, you can add it to your account. Then you will be able to log in to the server using it. Otherwise, they will email you the root password.

Now pick the server’s IP address:

Before we log in to the server, let’s point our domain name to this IP address. This will save some time because DNS settings usually take a few minutes to propagate.

So here we added two A records, one pointing to the naked domain “djangoboards.com” and the other one for “www.djangoboards.com”. We will use NGINX to configure a canonical URL.

Now let’s log in to the server using your terminal:

ssh root@45.55.144.54
root@45.55.144.54's password:

Then you should see the following message:

You are required to change your password immediately (root enforced)
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

Last login: Sun Oct 15 18:39:21 2017 from 82.128.188.51
Changing password for root.
(current) UNIX password:

Set the new password, and let’s start to configure the server.

sudo apt-get update
sudo apt-get -y upgrade

If you get any prompt during the upgrade, select the option “keep the local version currently installed”.

Python 3.6

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.6

PostgreSQL

sudo apt-get -y install postgresql postgresql-contrib

NGINX

sudo apt-get -y install nginx

Supervisor

sudo apt-get -y install supervisor
sudo systemctl enable supervisor
sudo systemctl start supervisor

Virtualenv

wget https://bootstrap.pypa.io/get-pip.py
sudo python3.6 get-pip.py
sudo pip3.6 install virtualenv

Application User

Create a new user with the command below:

adduser boards

Usually I just pick the name of the application. Enter a password and, optionally, fill in the extra information at the prompts.

Now add the user to the sudoers list:

gpasswd -a boards sudo

PostgreSQL Database Setup

First switch to the postgres user:

sudo su - postgres

Create a database user:

createuser u_boards

Create a new database and set the user as the owner:

createdb django_boards --owner u_boards

Define a strong password for the user:

psql -c "ALTER USER u_boards WITH PASSWORD 'BcAZoYWsJbvE7RMgBPzxOCexPRVAq'"

We can now exit the postgres user:

exit

Django Project Setup

Switch to the application user:

sudo su - boards

First, we can check where we are:

pwd
/home/boards

Now let’s clone the repository with our code:

git clone https://github.com/sibtc/django-beginners-guide.git

Create a virtual environment:

virtualenv venv -p python3.6

Activate the virtualenv:

source venv/bin/activate

Install the requirements:

pip install -r django-beginners-guide/requirements.txt

We will have to add two extra libraries here: Gunicorn and the PostgreSQL driver:

pip install gunicorn
pip install psycopg2

Now inside the /home/boards/django-beginners-guide folder, let’s create a .env file to store the database credentials, the secret key and everything else:

/home/boards/django-beginners-guide/.env

SECRET_KEY=rqr_cjv4igscyu8&&(0ce(=sy=f2)p=f_wn&@0xsp7m$@!kp=d
ALLOWED_HOSTS=.djangoboards.com
DATABASE_URL=postgres://u_boards:BcAZoYWsJbvE7RMgBPzxOCexPRVAq@localhost:5432/django_boards

Here is the syntax of the database URL: postgres://db_user:db_password@db_host:db_port/db_name.
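The settings.py used later in this section reads configuration with python-decouple’s config() helper; for DATABASE_URL, a companion package such as dj-database-url is the usual way to turn that URL into Django’s DATABASES dictionary. A minimal sketch under that assumption (the tutorial’s actual settings.py is not reproduced here):

# myproject/settings.py (sketch) -- assumes python-decouple and
# dj-database-url are installed; this pairing is an assumption, not
# confirmed by this excerpt of the tutorial.
from decouple import config
import dj_database_url

DATABASES = {
    # Expands postgres://db_user:db_password@db_host:db_port/db_name into
    # Django's ENGINE/NAME/USER/PASSWORD/HOST/PORT keys.
    'default': dj_database_url.parse(config('DATABASE_URL')),
}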

Now let’s migrate the database, collect the static files and create a super user:

cd django-beginners-guide
python manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, boards, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying boards.0001_initial... OK
  Applying boards.0002_auto_20170917_1618... OK
  Applying boards.0003_topic_views... OK
  Applying sessions.0001_initial... OK

Now the static files:

python manage.py collectstatic
Copying '/home/boards/django-beginners-guide/static/js/jquery-3.2.1.min.js'
Copying '/home/boards/django-beginners-guide/static/js/popper.min.js'
Copying '/home/boards/django-beginners-guide/static/js/bootstrap.min.js'
Copying '/home/boards/django-beginners-guide/static/js/simplemde.min.js'
Copying '/home/boards/django-beginners-guide/static/css/app.css'
Copying '/home/boards/django-beginners-guide/static/css/bootstrap.min.css'
Copying '/home/boards/django-beginners-guide/static/css/accounts.css'
Copying '/home/boards/django-beginners-guide/static/css/simplemde.min.css'
Copying '/home/boards/django-beginners-guide/static/img/avatar.svg'
Copying '/home/boards/django-beginners-guide/static/img/shattered.png'
...

This command copies all the static assets to an external directory where NGINX can serve the files for us. More on that later.

Now create a super user for the application:

python manage.py createsuperuser

Configuring Gunicorn

So, Gunicorn is the one responsible for executing the Django code behind a proxy server.

Create a new file named gunicorn_start inside /home/boards:

#!/bin/bash

NAME="django_boards"
DIR=/home/boards/django-beginners-guide
USER=boards
GROUP=boards
WORKERS=3
BIND=unix:/home/boards/run/gunicorn.sock
DJANGO_SETTINGS_MODULE=myproject.settings
DJANGO_WSGI_MODULE=myproject.wsgi
LOG_LEVEL=error

cd $DIR
source ../venv/bin/activate

export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR:$PYTHONPATH

exec ../venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $WORKERS \
  --user=$USER \
  --group=$GROUP \
  --bind=$BIND \
  --log-level=$LOG_LEVEL \
  --log-file=-

This script will start the application server. We are providing some information such as where the Django project lives, which user should run the server, and so on.

Now make this file executable:

chmod u+x gunicorn_start

Create two empty folders, one for the socket file and one to store the logs:

mkdir run logs

Right now the directory structure inside /home/boards should look like this:

django-beginners-guide/
gunicorn_start
logs/
run/
staticfiles/
venv/

The staticfiles folder was created by the collectstatic command.
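For NGINX to serve those files later, STATIC_ROOT has to resolve to /home/boards/staticfiles. A hedged sketch of what the settings presumably look like; the exact BASE_DIR arithmetic is an assumption and depends on where your settings module lives:

import os

# Assuming settings.py lives at
# /home/boards/django-beginners-guide/myproject/settings.py,
# BASE_DIR is the repository root, /home/boards/django-beginners-guide.
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
# One level above the repository, i.e. /home/boards/staticfiles,
# matching the alias in the NGINX config shown later.
STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), 'staticfiles')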

Configuring Supervisor

First, create an empty log file inside the /home/boards/logs/ folder:

touch logs/gunicorn.log

Now create a new supervisor file:

sudo vim /etc/supervisor/conf.d/boards.conf

[program:boards]
command=/home/boards/gunicorn_start
user=boards
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/boards/logs/gunicorn.log

Save the file and run the commands below:

sudo supervisorctl reread
sudo supervisorctl update

Now check the status:

sudo supervisorctl status boards
boards                           RUNNING   pid 308, uptime 0:00:07

Configuring NGINX

The next step is to set up the NGINX server to serve the static files and to pass the requests on to Gunicorn:

Add a new configuration file named boards inside /etc/nginx/sites-available/:

upstream app_server {
    server unix:/home/boards/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name www.djangoboards.com;  # here can also be the IP address of the server

    keepalive_timeout 5;
    client_max_body_size 4G;

    access_log /home/boards/logs/nginx-access.log;
    error_log /home/boards/logs/nginx-error.log;

    location /static/ {
        alias /home/boards/staticfiles/;
    }

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

Create a symbolic link to the sites-enabled folder:

sudo ln -s /etc/nginx/sites-available/boards /etc/nginx/sites-enabled/boards

Remove the default NGINX website:

sudo rm /etc/nginx/sites-enabled/default

Restart the NGINX service:

sudo service nginx restart

At this point, if the DNS records have already propagated, the website should be available at www.djangoboards.com.

Configuring an Email Service

One of the best options to get started is Mailgun. It offers a very reliable free plan covering 12,000 emails per month.

Sign up for a free account, then just follow the steps; it’s very straightforward. You will have to work together with the service where you registered your domain. In my case, it was Namecheap.

Click on “add domain” to add a new domain to your account. Follow the instructions and make sure you use the “mg.” subdomain:

Now grab the first set of DNS records, it’s two TXT records:

Add them to your domain, using the web interface offered by your registrar:

Do the same thing with the MX records:

Add them to the domain:

Now this step is not mandatory, but since we are already here, confirm it as well:

After adding all the DNS records, click on the Check DNS Records Now button:

Now we need to have some patience. Sometimes it takes a while to validate the DNS.

Meanwhile, we can configure the application to receive the connection parameters.

myproject/settings.py

# config() is python-decouple's helper, used throughout this settings file.
from decouple import config

EMAIL_BACKEND = config('EMAIL_BACKEND', default='django.core.mail.backends.smtp.EmailBackend')
EMAIL_HOST = config('EMAIL_HOST', default='')
EMAIL_PORT = config('EMAIL_PORT', default=587, cast=int)
EMAIL_HOST_USER = config('EMAIL_HOST_USER', default='')
EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD', default='')
EMAIL_USE_TLS = config('EMAIL_USE_TLS', default=True, cast=bool)

DEFAULT_FROM_EMAIL = 'Django Boards <noreply@djangoboards.com>'
EMAIL_SUBJECT_PREFIX = '[Django Boards] '

Then, my local machine .env file would look like this:

SECRET_KEY=rqr_cjv4igscyu8&&(0ce(=sy=f2)p=f_wn&@0xsp7m$@!kp=d
DEBUG=True
ALLOWED_HOSTS=.localhost,127.0.0.1
DATABASE_URL=sqlite:///db.sqlite3
EMAIL_BACKEND=django.core.mail.backends.console.EmailBackend

And my production .env file would look like this:

SECRET_KEY=rqr_cjv4igscyu8&&(0ce(=sy=f2)p=f_wn&@0xsp7m$@!kp=d
ALLOWED_HOSTS=.djangoboards.com
DATABASE_URL=postgres://u_boards:BcAZoYWsJbvE7RMgBPzxOCexPRVAq@localhost:5432/django_boards
EMAIL_HOST=smtp.mailgun.org
EMAIL_HOST_USER=postmaster@mg.djangoboards.com
EMAIL_HOST_PASSWORD=ED2vmrnGTM1Rdwlhazyhxxcd0F

You can find your credentials in the Domain Information section on Mailgun.

  • EMAIL_HOST: SMTP Hostname
  • EMAIL_HOST_USER: Default SMTP Login
  • EMAIL_HOST_PASSWORD: Default Password

We can test the new settings on the production server. Make the changes to the settings.py file on your local machine, commit, and push them to the remote repository. Then, on the server, pull the new code and restart the Gunicorn process:

git pull

Edit the .env file with the email credentials.

Then restart the Gunicorn process:

sudo supervisorctl restart boards
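Before touching the UI, you can verify the SMTP settings directly from the Django shell. This quick check is not part of the original steps, and the recipient address below is a placeholder you should replace:

python manage.py shell

from django.core.mail import send_mail

# Uses the EMAIL_* settings loaded from the production .env file.
# Returns the number of messages sent (1 on success).
send_mail(
    'Test email',
    'If you can read this, the Mailgun setup works.',
    'noreply@djangoboards.com',
    ['you@example.com'],  # hypothetical recipient -- use your own address
)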

Now we can try to start the password reset process:

On the Mailgun dashboard you can see some statistics about the email delivery:

Configuring HTTPS Certificate

Now let’s protect our application with a nice HTTPS certificate provided by Let’s Encrypt.

Setting up HTTPS has never been easier. Better yet, nowadays we can get certificates for free. The certbot tool takes care of installing and renewing the certificates for us. It’s very straightforward:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Now install the certs:

sudo certbot --nginx

Just follow the prompts. When asked about:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.

Choose 2 to redirect all HTTP traffic to HTTPS.

With that the site is already being served over HTTPS:

Set up automatic renewal of the certs. Run the command below to edit the crontab file:

sudo crontab -e

Add the following line to the end of the file:

0 4 * * * /usr/bin/certbot renew --quiet

This command will run every day at 4 am. All certificates expiring within 30 days will automatically be renewed.

Conclusions

Thanks a lot to everyone who followed this tutorial series and left comments and feedback. I really appreciate it! This was the last tutorial of the series. I hope you enjoyed it!

Even though this was the last part of the tutorial series, I plan to write a few follow-up tutorials exploring other interesting topics as well, such as database optimization and adding more features on top of what we have at the moment.

By the way, if you are interested in contributing to the project, feel free to submit pull requests! The source code of the project is available on GitHub: https://github.com/sibtc/django-beginners-guide/

And please let me know what else you would like to see next! :-)

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-10-15

Planet Apache - Sun, 2017-10-15 19:58
Categories: FLOSS Project Planets

Iain R. Learmonth: Free Software Efforts (2017W41)

Planet Debian - Sun, 2017-10-15 18:00

Here’s my weekly report for week 41 of 2017. This week I have explored some Java 8 features, looked at automatic updates in a few Linux distributions, and decided that I don’t actually need swap anymore.

Debian

The issue that was preventing the migration of the Tasktools Packaging Team’s mailing list from Alioth to Savannah has now been resolved.

Ana’s chkservice package that I sponsored last week has been ACCEPTED into unstable and since MIGRATED to testing.

Tor Project

I have produced a patch for the Tor Project website to update links to the Onionoo documentation now that it has moved (#23802). I’ve updated the Debian and Ubuntu relay configuration instructions to use systemctl instead of service where appropriate (#23048).

When a Tor relay is less than 2 years old, an alert will now appear on Atlas linking to the new relay lifecycle blog post (#23767). This should help new relay operators understand why their relay is not immediately fully loaded but instead takes some time to ramp up.

I have gone through the tickets for Tor Cloud and did not find any tickets that contain any important information that would be useful to someone reviving the project. I have closed out these tickets and the Tor Cloud component no longer has any non-closed tickets (#7763, #8544, #8768, #9064, #9751, #10282, #10637, #11153, #11502, #13391, #14035, #14036, #14073, #15821).

I’ve continued to work on turning the Atlas application into an integrated part of Tor Metrics (#23518) and you can see some progress here.

Finally, I’ve continued hacking on a Twitter bot to tweet factoids about the public Tor network, and you can now enjoy some JavaDoc documentation if you’d like to learn a little about its internals. I am still waiting for a git repository to be created (#23799) but will be publishing the sources shortly after that ticket is actioned.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remains £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

I’d like to thank Digital Ocean for providing me with further credit for their platform to support my open source work.

I do not find it likely that I’ll be travelling to Cambridge for the miniDebConf, as the train alone would be around £350 and hotel accommodation a further £600 (to include both me and Ana).

Categories: FLOSS Project Planets

Yasoob Khalid: Weird Comparison Issue in Python

Planet Python - Sun, 2017-10-15 17:31

Hi guys! I am back with a new article. This time I will tackle a problem which seems easy enough at first but will surprise some of you. Suppose you have the following piece of code:

a = 3
b = False
c = """12"""
d = 4.7

and you have to evaluate this:

d + 2 * a > int(c) == b

Before reading the rest of the post please take a minute to solve this statement in your head and try to come up with the answer.

So while solving it my thought process went something like this:

2 * a = 6
d + 6 = 10.7
10.7 > int(c) is equal to False
False == b is equal to True

But lo and behold, if we run this code in the Python shell we get the following output:

False

Dang! What went wrong there? Was our thinking wrong? I was pretty sure it was supposed to return True. I went through the official docs a couple of times but couldn’t find the answer. There was also a possibility in my mind that this might be some Python 2 bug, but when I tried the code in Python 3 I got the same output. Finally, I turned to Python’s IRC channel, which is always full of extremely helpful people. I got my answer from there.

So I learned that I was chaining comparisons. But I knew that already. What I didn’t know was that whenever you chain comparisons, Python compares each pair in order and then ANDs the results. So our comparison code is equivalent to:

(d + 2*a) > (int(c)) and (int(c)) == (b)

This brings us to the question: whenever you chain comparisons, does Python always compare each pair in order and then do an “AND”?

As it turns out, this is exactly what Python does: x <comparison> y <comparison> z is executed just like x <comparison> y and y <comparison> z, except y is only evaluated once.
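A small experiment makes both halves of that rule visible, the implicit and as well as the single evaluation of the middle operand (a sketch you can paste into a Python shell):

def middle():
    print('evaluated')  # shows how many times the middle operand runs
    return 12

# Chained: middle() runs once, then both comparisons are ANDed.
print(13 > middle() == 12)               # prints 'evaluated' once, then True

# Spelled out: middle() runs twice (13 > 12 is True, so no short-circuit).
print(13 > middle() and middle() == 12)  # prints 'evaluated' twice, then True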

I hope you found this article helpful. If you have any questions, comments, or suggestions, please feel free to reach out to me via email or the comments section below.
Categories: FLOSS Project Planets

LibreOffice Conference 2017

Planet KDE - Sun, 2017-10-15 12:00

This week the annual LibreOffice conference was held in Rome, and I had the pleasure to attend. The city of Rome is migrating its IT infrastructure to open software and standards, and the city council was kind enough to provide the awesome venue for the event, the Campidoglio.

Photo by Simon Phipps

It is always interesting to meet new people from other communities that share the same values we have in KDE. You meet new friends and you get to know another perspective about the things you are doing.

As a bonus point, I also had the pleasure to meet in person with KDE contributors Andreas Kainz, Franklin Weng, Heiko Tzietze and Jos van den Oever. See you all at Akademy next year!

LibreOffice in Plasma 5

Among the speakers, Katarina Behrens from CIB talked about the status of the Qt5 port of the VCL plugin for KDE Plasma. VCL is the toolkit used by LibreOffice to draw the UI of the program, and its plugin-based architecture allows to adapt the UI to the various native toolkits (such as Qt or GTK).

The KDE plugin is currently stuck with Qt4/kdelibs4 and Katarina has been working on porting it to the new Qt5/KF5 stack. The city of Munich is also sponsoring this work, since they will continue to use LibreOffice for at least some years. The main challenge has been getting rid of the legacy X11 code used for drawing the UI. As a result of this task, the new version of the KDE plugin will get proper Wayland and Hi-DPI support.

If you are wondering whether this will bring the native Plasma 5 file picker to LibreOffice, the answer is yes! If any developer wants to help reach this milestone, feel free to contact Katarina, who will introduce you to what still needs to be done (a lot).

LibreOffice Online

Lastly, I talked with the Collabora people about the issues that KDE faced with LibreOffice Online in our Nextcloud instance. They assured me that the product has been greatly improved with respect to collaborative editing. Judging by the number of talks and speakers on this topic, it is clear that they have been working hard on it.

Our instance was also running a slightly old version of Collabora Online (2.0.7), so they recommended upgrading to the 2.1.x series (which Ben quickly did). I think that we as a community should give LibreOffice Online another try and report back to the Collabora developers if we still find issues with the tool. As always, that’s the best way to improve FLOSS!

More photos of the event are available in this album.

Categories: FLOSS Project Planets

Kubuntu Artful Aardvark (17.10) initial RC images now available

Planet KDE - Sat, 2017-10-14 22:40

Artful Aardvark (17.10) initial Release Candidate (RC) images are now available for testing. Help us make 17.10 the best release yet!

Note: This is an initial spin of the RC images. It is likely that at least one more rebuild will be done on Monday.

Adam Conrad from the Ubuntu release team list:

Today, I spun up a set of images for everyone with serial 20171015.

Those images are *not* final images (ISO volid and base-files are still
not set to their final values), intentionally, as we had some hiccups
with langpack uploads that are landing just now.

That said, we need as much testing as possible, bugs reported (and, if
you can, fixed), so we can turn around and have slightly more final
images produced on Monday morning. If we get no testing, we get no
fixing, so no time like the present to go bug-hunting.

… Adam

The Kubuntu team will be releasing 17.10 on October 19, 2017.

This is an initial pre-release. Kubuntu RC pre-releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Kubuntu pre-releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers

Getting the Kubuntu 17.10 Initial Release Candidate:

To upgrade to Kubuntu 17.10 pre-releases from 17.04, run

sudo do-release-upgrade -d

from a command line.

Download a bootable image and put it onto a DVD or USB drive here:

http://iso.qa.ubuntu.com/qatracker/milestones/383/builds (the little CD icon)

See our release notes: https://wiki.ubuntu.com/ArtfulAardvark/Kubuntu

Please report any bugs on Launchpad using the command line:

ubuntu-bug packagename

Check on the IRC channels, the Kubuntu forum, or the Kubuntu mailing lists if you don’t know the package name. Once the bug is reported on Launchpad, please link to it on the QA tracker where you got your RC image. Join the community ISO testing party: https://community.ubuntu.com/t/ubuntu-17-10-community-iso-testing/458

KDE bugs (bugs in Plasma or KDE applications) are still filed at https://bugs.kde.org.

Categories: FLOSS Project Planets

Bryan Ruby: Drupal 8.4 Available and Fixes Significant Database Caching Issues

Planet Drupal - Sat, 2017-10-14 21:58

Your hosting account was found to be causing an overload of MySQL resources. What can you do? Upgrade your Drupal 8 website to Drupal 8.4 or higher.

One of my goals in rebranding my website from CMS Report to socPub was to write diverse articles beyond the topic of content management systems. Yet here we go again with another CMS-related article. The Drupal open source project recently made Drupal 8.4 available, and for me this version has been a long time coming, as it addresses some long-standing frustrations I've had with Drupal 8 from the perspective of a site administrator. While Drupal 8.4 adds some nice new features, I'm just as excited about the bug fixes and performance improvements delivered in this new version of Drupal.

When Drupal 8 was introduced, it made significant improvements in how it caches and renders pages. That's great news for websites that use Drupal's built-in caching to speed up delivery of pages or page elements. But there was one unwanted side effect of the cache enhancements: excessive growth of cache tables, with tens or hundreds of thousands of entries and gigabytes in size. For my own website it is not uncommon to see the database reach 4 GB in size. Let's put it this way: it was no fun to receive a letter from my hosting provider saying they weren't too happy about my resource usage. Worse, they threatened to shut down my website if I didn't manage the database size better. Just in the nick of time, Drupal 8.4 delivers a fix for the cache growth by introducing a new default limit of 5,000 rows per cache bin.

I'm still playing with this change and haven't found a lot of documentation, but you can override the default row limit in Drupal's settings.php via the setting "database_cache_max_rows". For my site, the following settings have helped me keep my MySQL database under half a gigabyte:

$settings['database_cache_max_rows']['default'] = 5000;
$settings['database_cache_max_rows']['bins']['page'] = 500;
$settings['database_cache_max_rows']['bins']['dynamic_page_cache'] = 500;
$settings['database_cache_max_rows']['bins']['render'] = 1000;

For those of you who may not be ready to upgrade to Drupal 8.4 but still need to handle oversized cache tables today, I had some luck with the Slushi cache module. A good summary of similar solutions for Drupal 8 versions prior to 8.4 can be found on Jeff Geerling's blog.

Notable New Features in Drupal 8.4

Of course the purpose of Drupal 8.4 isn't just to address my pet peeve about Drupal caching but also to bring Drupal users a number of new features and improvements. Some of the more significant additions and changes in Drupal that affect me and possibly you include:

Datetime Range

For non-Drupal users I know this is going to sound odd, but despite a number of community approaches there has never really been a standard format for expressing a date or time range, which is commonly needed in event and planning calendars. Drupal 8.4 addresses this missing field type with the new core Datetime Range module, which supports contributed modules like Calendar and shares a consistent API with other Datetime fields. Future releases may improve Views support, usability, Datetime Range field validation, and REST support.

Content Moderation and Workflow

Although I've been a longtime user of Drupal, for a two-year period I managed my website on the Agility CMS. One of the benefits of Agility over Drupal was the workflow and moderation tools delivered "out of the box". The ability to moderate content becomes especially important on websites where multiple authors and editors collaborate and need to mark whether content is a draft, ready for review, in need of revision, ready to publish, etc. With Drupal 8.4 the Workflows module is now stable and provides the framework for additional modules such as the much anticipated Content Moderation module. Currently, the new core Content Moderation module is considered experimental and beta stable, so further changes should be expected. Content moderation workflows can now apply to any entity type that supports revisions, and numerous usability issues and critical bugs are resolved in this release.

Media Handling

Another long-standing issue for me has been how Drupal handles, displays, and lets you reuse images (out of the box, it doesn't). Over the years a host of solutions has appeared via contributed modules, but I've often found myself frustrated that support for these modules varies and compatible versions often aren't available until weeks or months after a new major version of Drupal is released. The new core Media module aims to remove this hurdle by providing an API for reusable media entities and references. It is based on the contributed Media Entity module, which has become popular among Drupal users in recent years.

Unfortunately, the core Media module still needs work and is currently marked hidden. In other words, Media will not appear by default on Drupal 8.4's module administration page. The module will be displayed to site builders normally once related user experience issues are resolved in a future release. If you install a contributed module under development that depends on the core Media module, it will enable Media automatically for you. Similarly, the REST API and normalizations for Media are not final, and support for decoupled applications will be improved in a future release. So while the Media API is available in this version of Drupal, most of us non-developers will need to wait for additional development to see the benefits of this module.

Additional Information on Drupal 8.4

An overview of Drupal 8.4 can be found at Drupal.org, but for a fuller list of the changes and fixes you'll want to check out the release notes. As always, links to the latest version of Drupal can be found on the project page. I've seen a few strange errors in the logs since updating my site from Drupal 8.3 to 8.4, but nothing significant enough for me to recommend waiting to install Drupal 8.4. For the more cautious, the next bugfix release (8.4.1) is scheduled for November 1, 2017.

Article originally published at socPub.

Categories: FLOSS Project Planets

Norbert Preining: TeX Live Manager: JSON output

Planet Debian - Sat, 2017-10-14 21:32

With the development of TLCockpit continuing, I found the need for an easy exchange format between the TeX Live Manager tlmgr and frontend programs like TLCockpit. Thus, I have implemented JSON output for the tlmgr info command.

While the format is not 100% stable (I might change some things), I consider it pretty settled. The output of tlmgr info --data json is a JSON array with a JSON object for each package requested (the default is to list all).

[ TLPackageObj, TLPackageObj, ... ]

The structure of the JSON object TLPackageObj reflects the internal Perl hash. Keys guaranteed to be present are name (String) and available (Boolean). In case the package is available, there are the following further keys, sorted by their type:

  • String type: name, shortdesc, longdesc, category, catalogue, containerchecksum, srccontainerchecksum, doccontainerchecksum
  • Number type: revision, runsize, docsize, srcsize, containersize, srccontainersize, doccontainersize
  • Boolean type: available, installed, relocated
  • Array type: runfiles (Strings), docfiles (Strings), srcfiles (Strings), executes (Strings), depends (Strings), postactions (Strings)
  • Object type:
    • binfiles: keys are architecture names, values are arrays of strings (list of binfiles)
    • binsize: keys are architecture names, values or numbers
    • docfiledata: keys are docfile names, values are objects with optional keys details and lang
    • cataloguedata: optional keys are topics, version, license, ctan, date; values are all strings

A rather long example showing the output for the package latex, formatted with json_pp, with the list of files and the long description shortened:

[ { "installed" : true, "doccontainerchecksum" : "5bdfea6b85c431a0af2abc8f8df160b297ad73f6a324ca88df990f01f24611c9ae80d2f6d12c7b3767308fbe3de3fca3d11664b923ea4080fb13fd056a1d0c3d", "docfiles" : [ "texmf-dist/doc/latex/base/README.txt", .... "texmf-dist/doc/latex/base/webcomp.pdf" ], "containersize" : 163892, "depends" : [ "luatex", "pdftex", "latexconfig", "latex-fonts" ], "runsize" : 414, "relocated" : false, "doccontainersize" : 12812184, "srcsize" : 752, "revision" : 43813, "srcfiles" : [ "texmf-dist/source/latex/base/alltt.dtx", .... "texmf-dist/source/latex/base/utf8ienc.dtx" ], "category" : "Package", "cataloguedata" : { "version" : "2017/01/01 PL1", "topics" : "format", "license" : "lppl1.3", "date" : "2017-01-25 23:33:57 +0100" }, "srccontainerchecksum" : "1d145b567cf48d6ee71582a1f329fe5cf002d6259269a71d2e4a69e6e6bd65abeb92461d31d7137f3803503534282bc0c5546e5d2d1aa2604e896e607c53b041", "postactions" : [], "binsize" : {}, "longdesc" : "LaTeX is a widely-used macro package for TeX, [...]", "srccontainersize" : 516036, "containerchecksum" : "af0ac85f89b7620eb7699c8bca6348f8913352c473af1056b7a90f28567d3f3e21d60be1f44e056107766b1dce8d87d367e7f8a82f777d565a2d4597feb24558", "executes" : [], "binfiles" : {}, "name" : "latex", "catalogue" : null, "docsize" : 3799, "available" : true, "runfiles" : [ "texmf-dist/makeindex/latex/gglo.ist", ... "texmf-dist/tex/latex/base/x2enc.dfu" ], "shortdesc" : "A TeX macro package that defines LaTeX" } ]

What is currently not available via tlmgr info and thus also not via the JSON output is access to virtual TeX Live databases with several member databases (multiple repositories). I am thinking about how to incorporate this information.

These changes are currently available in the tlcritical repository, but will enter proper TeX Live repositories soon.

Using this JSON output I will rewrite the current TLCockpit tlmgr interface to display more complete information.

Categories: FLOSS Project Planets
Syndicate content