FLOSS Project Planets

Ned Batchelder: No PyCon for me this year

Planet Python - Thu, 2017-01-05 22:23

2017 will be different for me in one specific way: I won't be attending PyCon. I've been to ten in a row.

This year, the Open edX conference is in Madrid two days after PyCon, actually overlapping with the sprints. I'm not a good enough traveler to do both. Crossing nine timezones is not something to be taken lightly.

I'll miss the usual love-fest at PyCon, but after ten in a row, it should be OK to miss one. I can say that now, but probably in May I will feel like I am missing the party. Maybe I really will watch talks on video for a change.

I usually would be working on a presentation to give. I like making presentations, but it is a lot of work. This spring I'll have that time back.

In any case, this will be a new way to experience the Python community. See you all in 2018 in Cleveland!

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppTOML 0.1.0

Planet Debian - Thu, 2017-01-05 21:57

Big news: RcppTOML now works on Windows too!

This package had an uneventful 2016 without a single update. Release 0.0.5 had come out in late 2015, and we had no bugs or issues to fix. We use the package daily in production: a key part of our parameterisation is in TOML files.

In the summer, I took one brief stab at building on Windows now that R sports a proper C++11 compiler on Windows too. I got stuck over the not-uncommon problem of incomplete POSIX and/or C++11 support with MinGW and g++-4.9. And sadly ... it appears I wasn't quite awake enough to realize that the missing functionality was right there, exposed by Rcpp! Having updated that date / datetime functionality very recently, I was in a better position to realize this when Devin Pastoor asked two days ago. I was able to make a quick suggestion which he tested, which I then refined ... here we are: RcppTOML on Windows too! (For the impatient: CRAN has reported that it has built the Windows binaries; they should hit mirrors such as this CRAN package for RcppTOML shortly.)

So what is this TOML thing, you ask? A file format, very suitable for configurations, meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML -- though sadly these may be too ubiquitous now. But TOML is making good inroads with newer and more flexible projects. The Hugo static blog compiler is one example; the Cargo system of Crates (aka "packages") for the Rust language is another example.
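To give a feel for the format, here is a tiny, made-up TOML document and how it parses from Python, using the standard-library tomllib module (Python 3.11+). This is only an illustration of the typed parse TOML enables, not part of RcppTOML itself; RcppTOML exposes the equivalent typed result to R.

import tomllib  # standard library in Python 3.11+

doc = """
title = "run-parameters"

[model]
iterations = 1000                   # parsed as an integer, not a string
tolerance  = 1.5e-8                 # parsed as a float
start      = 2017-01-05T21:57:00Z   # parsed as an offset datetime
"""

config = tomllib.loads(doc)
print(config["model"]["iterations"] + 1)   # typed access: prints 1001

A typo such as a missing closing quote in the document above raises a parse error immediately, which is exactly the behaviour praised in the paragraph above.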

The new release updates the included cpptoml template header by Chase Geigle, brings the aforementioned Windows support and updates the Travis configuration. We also added a NEWS file for the first time so here are all changes so far:

Changes in version 0.1.0 (2017-01-05)
  • Added Windows support by relying on Rcpp::mktime00() (#6 and #8 closing #5 and #3)

  • Synchronized with cpptoml upstream (#9)

  • Updated Travis CI support via newer run.sh

Changes in version 0.0.5 (2015-12-19)
  • Synchronized with cpptoml upstream (#4)

  • Improved and extended examples

Changes in version 0.0.4 (2015-07-16)
  • Minor update of upstream cpptoml.h

  • More explicit call of utils::str()

  • Properly cope with empty lists (#2)

Changes in version 0.0.3 (2015-04-27)
  • First CRAN release after four weeks of initial development

Courtesy of CRANberries, there is a diffstat report for this release.

More information and examples are on the RcppTOML page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Joey Hess: the cliff

Planet Debian - Thu, 2017-01-05 20:12

Falling off the cliff is always a surprise. I know it's there; I've been living next to it for months. I chart its outline daily. Avoiding it has become routine, and so comfortable, and so failing to avoid it surprises.

Monday evening around 10 pm, the laptop starts draining down from 100%. The house battery, which has been steady around 11.5-10.8 volts since well before the winter solstice and was in the low 10's, has plummeted to 8.5 volts.

With the old batteries, the cliff used to be at 7 volts, but now, with new batteries but fewer, it's in a surprising place, something like 10 volts, and I fell off it.

Weather forecast for the week ahead is much like the previous week: Maybe a couple sunny afternoons, but mostly no sun at all.

Falling off the cliff is not all bad. It shakes things up. It's a good opportunity to disconnect, to read paper books, and think long winter thoughts. It forces some flexibility.

I have an auxiliary battery for these situations. With its own little portable solar panel, it can charge the laptop and run it for around 6 hours. But it takes it several days of winter sun to charge back up.

That's enough to get me through the night. Then I take a short trip, and glory in one sunny afternoon. But I know that won't get me out of the hole, the batteries need a sunny week to recover. This evening, I expect to lose power again, and probably tomorrow evening too.

Luckily, my goal for the week was to write slides for two talks, and I've been able to do that despite being mostly offline, and sometimes decomputered.

And, in a few days I will be jetting off to Australia! That should give the batteries a perfect chance to recover.

Previously: battery bank refresh late summer

Categories: FLOSS Project Planets

ARREA-Systems: Simple twitter feed with custom block

Planet Drupal - Thu, 2017-01-05 20:08
In this article, we will present how we built a simple Twitter feed in Drupal 8 with a custom block and without any custom module. This block will display a list of tweets pulled from a custom list, as in the example shown in the sidebar.
Categories: FLOSS Project Planets

Plasma 5.8.5 and Applications 16.12 by KDE now available in Chakra

Planet KDE - Thu, 2017-01-05 19:51


This announcement is also available in Spanish and Taiwanese Mandarin.

The latest updates for KDE's Plasma and Applications series are now available to all Chakra users, together with other important package upgrades.

Plasma 5.8.5 provides another round of bugfixes and translation updates for the 5.8 series, with changes found mostly in the plasma-desktop, plasma-workspace and kscreen packages.

Applications 16.12.0 is the first release of a new series and comes with several changes. kdelibs has been updated to 4.14.27.

1. New features have been introduced in many packages:

  • marble ships with a live day/night Plasma wallpaper of earth and a new widget.
  • kcharselect can now show emoticons and allows you to bookmark your favorite characters.
  • cantor gained support for Julia.
  • ark gained support for file and folder renaming, as well as copying and moving files inside the archive. You can now choose compression and encryption algorithms when creating archives and also open AR files, e.g. Linux *.a static libraries.
  • kopete can use X-OAUTH2 SASL authentication with the Jabber protocol, and the OTR encryption plugin received some fixes.
  • kdenlive has a new Rotoscoping effect, support for downloadable content and an updated Motion Tracker.
  • kmail and akregator implement Google Safe Browsing to check for malicious links and can now print documents.

    2. The following packages have now been ported to KDE Frameworks 5, with new features introduced in many cases:

  • audiocd-kio
  • kalzium
  • kdegraphics-mobipocket
  • kdialog
  • keditbookmarks
  • kfind
  • kgpg
  • konqueror
  • kqtquickcharts
  • ktouch
  • libkcddb
  • libkcompactdisc
  • okular
  • svgpart

    3. kdepim further split into new packages: akonadi-calendar-tools, akonadi-import-wizard, grantlee-editor, kmail-account-wizard, mbox-importer, pim-data-exporter, pim-sieve-editor, pim-storage-service-manager
    If you want to install the whole kdepim group you can use:
    sudo pacman -S kdepim

    4. The following packages are no longer supported by KDE and have been dropped from our repositories. You should remove them from your system manually if you do not use them but happen to have them installed:
  • kdepim-common. In case this conflicts with another package (like kmail-account-wizard), it is safe to manually remove it with sudo pacman -Rdd kdepim-common
  • kdepim-console
  • kde-baseapps-kdepasswd
  • kdgantt2
  • gpgmepp
  • kuser

    In addition, the following notable packages have been updated:

    [core]

  • qt5 group 5.7.1
  • mesa 13.0.2
  • llvm 3.9.1
  • gpgme 1.8.0
  • gnupg 2.1.17

    [desktop]

  • kdelibs 4.14.27
  • qtcreator 4.2.0

    [lib32]

  • winetricks 20161228

    It should be safe to answer yes to any replacement question by pacman. If in doubt or if you face another issue in relation to this update, please ask or report it on the related forum section.

    Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror synchronized with our main server after this announcement.
Categories: FLOSS Project Planets

    When does your career begin?

    Planet KDE - Thu, 2017-01-05 18:15

My first contact with technology, and I mean the first time that I touched a computer, was when I was eleven years old. My mom enrolled me in an introductory informatics course at a public institute in my hometown. Since then, I took every technology course I could get my hands on. When I was in high school and my school bought a laptop so we could do presentations, I was the one who removed the thousands of viruses and fixed the issues so the laptop could be useful. So it made sense to go to college in the tech area.

In 2011 I moved out of my hometown so I could start college. Computer Science was my choice. And my thinking until the beginning of 2015 was: I will go to college, finish, maybe do a master's degree, and then get a job.

Well, that plan didn't work out. I was thinking that my career would only begin after I finished college; however, with the way technology is evolving, we can't wait until after college.

When 2015 started, I was planning one more year of college, but then I decided to go to one of the biggest technology events in Brazil. When I got back home, I realized that I couldn't wait. I needed to go to more events, do networking, and meet people with the same interests or different ones. I needed to learn more than I was learning within the four walls of my classroom.

And it was with that work that, at the end of 2016, I could say: I have a career. And I can't say when it started. I only know that I have one.

I don't know if you can say: it was on this day, several years ago, that my career started.

    I can’t.

The moment I started to realize that I had a career was when I was at The Developers Conference in October of last year. For the first time, I was playing with my Arduino and a strip of LEDs. In that moment, I turned the LEDs on with code that I wrote in the Arduino IDE. That moment of pure happiness could not have happened if I had stayed at home… I was able to do it because of all the background that I had built.

When I realized that, my view of my life and future changed radically. I wasn't a person who planned for the future. I used to live in the moment (I still do sometimes). I couldn't plan further ahead than the next day's lunch. Maybe I have matured. Maybe now, at 24 years old, I have more life experience.

I just know that I think I'm on the right path. I'm discovering my values, my weaknesses, and my strengths while trying to build a strong career, without leaving my personal life behind.

Don't worry about when your career will start; just make sure that, once you have one, you are doing your best to build the career you want for yourself.

     

“We’re all stories in the end. Just make it a good one.”

— The Eleventh Doctor

     

     


    Categories: FLOSS Project Planets

    Mike Driscoll: wxPython Cookbook Artist Interview: Liza Tretyakova

    Planet Python - Thu, 2017-01-05 17:45

I always put a lot of thought into the covers of my books. For my first book on wxPython, I thought it would be fun to do a cookbook because I already had a lot of recipes on my blog. For the cover, my first thought was to have some kind of kitchen scene with mice cooks. Then I decided that was too obvious and went with the idea of an Old West cover with cowboys (or rather, cow mice) cooking at a fire.

    I asked Liza Tretyakova, my cover artist for wxPython Cookbook, to do a quick interview about herself. Here is what she had to say:

    Can you tell us a little about yourself (hobbies, education, etc):

My name is Liza Tretyakova, I'm a freelance illustrator currently working in Moscow.

    Education:

    • Moscow State University, Faculty of History of Arts
    • BA(Hons) Illustration, University of Plymouth


I have worked as an illustrator for about 10 years. I love horses and I used to have a horse. I'm also interested in archery. I like reading and spending a lot of time with my daughter Yara, who is 7 years old.

    What motivated you to be an illustrator versus some other profession?

Since I was a child I have been drawing all the time, and it just happened that I started to work as an illustrator; it turned into a profession.

    What process do you go through when you are creating a new piece of art?

It is different every time; there is no specific “recipe”.

    Categories: FLOSS Project Planets

    João Laia: Multiprocessing in Python via C

    Planet Python - Thu, 2017-01-05 17:39
Python plays very well with both C and Fortran. It is relatively easy to extend it using these languages, and to run very fast code in Python. Additionally, using the OpenMP API, it is easy to parallelize it.
    Categories: FLOSS Project Planets

    Lullabot: Journey Into the Year of the Fire Chicken

    Planet Drupal - Thu, 2017-01-05 17:00
    Mike & Matt are joined by a gaggle of Lullabots to talk successes and failures of 2016, and what we're looking forward to in 2017.
    Categories: FLOSS Project Planets

    FSF Blogs: Friday New Resolve Free Software Directory IRC meetup: January 6th starting at 12 p.m. EST/17:00 UTC

    GNU Planet! - Thu, 2017-01-05 16:04

    Participate in supporting the FSD by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on irc.freenode.org.

    Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the FSD contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.

    While the FSD has been and continues to be a great resource to the world over the past decade, it has the potential of being a resource of even greater value. But it needs your help!

This week's theme is adding new packages to the FSD. With a new year, it's time to add some new packages. While the FSD already lists over 15,000 packages, there are more out there that need to be added. We want to start 2017 off with some fresh packages.

    If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the FSD today! There are also weekly FSD Meetings pages that everyone is welcome to contribute to before, during, and after each meeting.

    Categories: FLOSS Project Planets


    FSF Blogs: Free Software Directory meeting recap for December 30th, 2016

    GNU Planet! - Thu, 2017-01-05 15:57

    Every week free software activists from around the world come together in #fsf on irc.freenode.org to help improve the Free Software Directory. This recaps the work we accomplished on the Friday, December 30th, 2016 meeting.

Last week's theme was looking back on 2016 and working on some of the projects we started throughout the year. There was a good long discussion about improving tools for working on the Directory, particularly the need for a bug tracker for feature requests and reporting issues with the Directory. mattl offered to help set one up for the Directory. There was also discussion about particular features that could be added to the Directory, such as the ability to upload pictures. More discussion will have to happen on these topics to decide which new resources would be most useful.

    ballpen also joined us and was trained on how to work on the Directory, and we look forward to them joining us again at future meetings. The meeting and year wrapped up with adding many new packages, which will be the theme of the first meeting of 2017.

    If you would like to help update the directory, meet with us every Friday in #fsf on irc.freenode.org from 12 p.m. to 3 p.m. EST (17:00 to 20:00 UTC).

    Categories: FLOSS Project Planets


    Eli Bendersky: Some notes on Luz - an assembler, linker and CPU simulator

    Planet Python - Thu, 2017-01-05 10:27

    A few years ago I wrote about Luz - a self-educational project to implement a CPU simulator and a toolchain for it, consisting of an assembler and a linker. Since then, I received some questions by email that made me realize I could do a better job explaining what the project is and what one can learn from it.

So I went back to the Luz repository and fixed it up to be more modern, in line with current documentation standards on GitHub. The landing README page should now provide a good overview, but I also wanted to write up some less formal documentation I could point to - a place to show off some of the more interesting features in Luz; a blog post seemed like the perfect medium for this.

As before, it makes sense to start with the Luz top-level diagram:

    Luz is a collection of related libraries and programs written in Python, implementing all the stages shown in the diagram above.

    The CPU simulator

The Luz CPU is inspired by MIPS (for the instruction set), by Altera Nios II (for the way "peripherals" are attached to the CPU), and by MPC 555 (for the memory controller), and is aimed at embedded uses, like Nios II. The Luz user manual lists the complete instruction set, explaining what each instruction means.

    The simulator itself is functional only - it performs the instructions one after the other, without trying to simulate how long their execution takes. It's not very remarkable and is designed to be simple and readable. The most interesting feature it has, IMHO, is how it maps "peripherals" and even CPU control registers into memory. Rather than providing special instructions or traps for OS system calls, Luz facilitates "bare-metal" programming (by which I mean, without an OS) by mapping "peripherals" into memory, allowing the programmer to access them by reading and writing special memory locations.

    My inspiration here was soft-core embeddable CPUs like Nios II, which let you configure what peripherals to connect and how to map them. The CPU can be configured before it's loaded onto real HW, for example to attach as many SPI interfaces as needed. For Luz, to create a new peripheral and attach it to the simulator one implements the Peripheral interface:

class Peripheral(object):
    """ An abstract memory-mapped peripheral interface.

        Memory-mapped peripherals are accessed through memory reads and
        writes. The address given to reads and writes is relative to the
        peripheral's memory map.
        Width is 1, 2, 4 for byte, halfword and word accesses.
    """
    def read_mem(self, addr, width):
        raise NotImplementedError()

    def write_mem(self, addr, width, data):
        raise NotImplementedError()

    Luz implements some built-in features as peripherals as well; for example, the core registers (interrupt control, exception control, etc). The idea here is that embedded CPUs can have multiple custom "registers" to control various features, and creating dedicated names for them bloats instruction encoding (you need 5 bits to encode one of 32 registers, etc.); it's better to just map them to memory.
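As a concrete (if made-up) illustration, a write-only peripheral that simply records whatever is written to it - similar in spirit to the debug queue described next - could implement the interface above like this. This is a minimal sketch with invented names, not code from the Luz repository:

class RecordingPeripheral(Peripheral):
    """Hypothetical peripheral that logs every value written to it."""
    def __init__(self, emit_to_stdout=True):
        self.values = []
        self.emit_to_stdout = emit_to_stdout

    def read_mem(self, addr, width):
        # Reads are not meaningful for this peripheral; just return 0.
        return 0

    def write_mem(self, addr, width, data):
        # Record the written value, optionally echoing it for debugging.
        self.values.append(data)
        if self.emit_to_stdout:
            print('Recorded: %s' % hex(data))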

    Another example is the debug queue - a peripheral useful for testing and debugging. It's a single word mapped to address 0xF0000 in the simulator. When the peripheral gets a write, it stores it in a special queue and optionally emits the value to stdout. The queue can later be examined. Here is a simple Luz assembly program that makes use of it:

# Counts from 0 to 9 [inclusive], pushing these numbers into the debug queue

    .segment code
    .global asm_main

    .define ADDR_DEBUG_QUEUE, 0xF0000

asm_main:
    li $k0, ADDR_DEBUG_QUEUE
    li $r9, 10              # r9 is the loop limit
    li $r5, 0               # r5 is the loop counter
loop:
    sw $r5, 0($k0)          # store loop counter to debug queue
    addi $r5, $r5, 1        # increment loop counter
    bltu $r5, $r9, loop     # loop back if not reached limit

    halt

    Using the interactive runner to run this program we get:

$ python run_test_interactive.py loop_simple_debugqueue
DebugQueue: 0x0
DebugQueue: 0x1
DebugQueue: 0x2
DebugQueue: 0x3
DebugQueue: 0x4
DebugQueue: 0x5
DebugQueue: 0x6
DebugQueue: 0x7
DebugQueue: 0x8
DebugQueue: 0x9
Finished successfully...
Debug queue contents:
['0x0', '0x1', '0x2', '0x3', '0x4', '0x5', '0x6', '0x7', '0x8', '0x9']

Assembler

    There's a small snippet of Luz assembly shown above. It's your run-of-the-mill RISC assembly, with the familiar set of instructions, fairly simple addressing modes and almost every instruction requiring registers (note how we can't store into the debug queue directly, for example, without dereferencing a register that holds its address).

    The Luz user manual contains a complete reference for the instructions, including their encodings. Every instruction is a 32-bit word, with the 6 high bits for the opcode (meaning up to 64 distinct instructions are supported).
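For instance, pulling the opcode out of a raw instruction word is just a shift and a mask. The sketch below relies only on the layout stated above (opcode in the 6 high bits of a 32-bit word); any further field layout would have to come from the user manual:

def opcode_of(word):
    """Extract the opcode from a 32-bit Luz instruction word."""
    assert 0 <= word <= 0xFFFFFFFF
    return (word >> 26) & 0x3F   # top 6 bits -> up to 64 distinct opcodes

print(opcode_of(0xFC000000))     # 63, the largest possible opcode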

    The code snippet also shows off some special features of the full Luz toolchain, like the special label asm_main. I'll discuss these later on in the section about linking.

    Assembly languages are usually fairly simple to parse, and Luz is no exception. When I started working on Luz, I decided to use the PLY library for the lexer and parser mainly because I wanted to play with it. These days I'd probably just hand-roll a parser.

    Luz takes another cool idea from MIPS - register aliases. While the assembler doesn't enforce any specific ABI on the coder, some conventions are very important when writing large assembly programs, and especially when interfacing with routines written by other programmers. To facilitate this, Luz designates register aliases for callee-saved registers and temporary registers.

    For example, the general-purpose register number 19 can be referred to in Luz assembly as $r19 but also as $s1 - the callee-saved register 1. When writing standalone Luz programs, one is free to ignore these conventions. To get a taste of how ABI-conformant Luz assembly would look, take a look at this example.

    To be honest, ABI was on my mind because I was initially envisioning a full programming environment for Luz, including a C compiler. When you have a compiler, you must have some set of conventions for generated code like procedure parameter passing, saved registers and so on; in other words, the platform ABI.

    Linker

    In my view, one of the distinguishing features of Luz from other assembler projects out there is the linker. Luz features a full linker that supports creating single "binaries" from multiple assembly files, handling all the dirty work necessary to make that happen. Each assembly file is first "assembled" into a position-independent object file; these are glued together by the linker which applies the necessary relocations to resolve symbols across object files. The prime sieve example shows this in action - the program is divided into three .lasm files: two for subroutines and one for "main".

    As we've seen above, the main subroutine in Luz is called asm_main. This is a special name for the linker (not unlike the _start symbol for modern Linux assemblers). The linker collects a set of object files produced by assembly, and makes sure to invoke asm_main from the special location 0x100000. This is where the simulator starts execution.

Luz also has the concept of object files. They are not unlike ELF images in nature: there's a segment table, an export table and a relocation table for each object, serving the expected roles. It is the job of the linker to make sense of this list of objects and correctly connect all call sites to final subroutine addresses.
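In data-structure terms, the description above boils down to something like the following sketch. It is not Luz's actual object format, just an illustration of the three tables mentioned:

class ObjectFile:
    """Sketch of the information an object file carries (illustrative only)."""
    def __init__(self, segments, exports, relocations):
        self.segments = segments        # segment name -> position-independent code/data bytes
        self.exports = exports          # symbol name -> (segment, offset) it lives at
        self.relocations = relocations  # list of (segment, offset, symbol) to patch at link time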

    Luz's standalone assembler can write an assembled image into a file in Intel HEX format, a popular format used in embedded systems to encode binary images or data in ASCII.
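Intel HEX itself is simple: each line is a record holding a byte count, an address, a record type, the data bytes and a checksum. The following sketch builds a single generic data record; it illustrates the format in general and is not taken from Luz's writer:

def ihex_data_record(address, data_bytes):
    """Build one Intel HEX data record (type 00) as an ASCII line."""
    record = [len(data_bytes), (address >> 8) & 0xFF, address & 0xFF, 0x00]
    record += list(data_bytes)
    checksum = (-sum(record)) & 0xFF   # two's complement of the byte sum
    return ':' + ''.join('%02X' % b for b in record + [checksum])

print(ihex_data_record(0x0100, [0x01, 0x02, 0x03, 0x04]))   # :0401000001020304F1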

    The linker was quite a bit of effort to develop. Since all real Luz programs are small I didn't really need to break them up into multiple assembly files; but I really wanted to learn how to write a real linker :) Moreover, as already mentioned my original plans for Luz included a C compiler, and that would make a linker very helpful, since I'd need to link some "system" code into the user's program. Even today, Luz has some "startup code" it links into every image:

# The special segments added by the linker.
#   __startup: 3 words
#   __heap: 1 word
#
LINKER_STARTUP_CODE = string.Template(r'''
        .segment __startup
            LI $$sp, ${SP_POINTER}
            CALL asm_main
        .segment __heap
        .global __heap
    __heap:
        .word 0
''')

    This code sets up the stack pointer to the initial address allocated for the stack, and calls the user's asm_main.

    Debugger and disassembler

    Luz comes with a simple program runner that will execute a Luz program (consisting of multiple assembly files); it also has an interactive mode - a debugger. Here's a sample session with the simple loop example shown above:

$ python run_test_interactive.py -i loop_simple_debugqueue
LUZ simulator started at 0x00100000
[0x00100000] [lui $sp, 0x13] >> set alias 0
[0x00100000] [lui $r29, 0x13] >> s
[0x00100004] [ori $r29, $r29, 0xFFFC] >> s
[0x00100008] [call 0x40003 [0x10000C]] >> s
[0x0010000C] [lui $r26, 0xF] >> s
[0x00100010] [ori $r26, $r26, 0x0] >> s
[0x00100014] [lui $r9, 0x0] >> s
[0x00100018] [ori $r9, $r9, 0xA] >> s
[0x0010001C] [lui $r5, 0x0] >> s
[0x00100020] [ori $r5, $r5, 0x0] >> s
[0x00100024] [sw $r5, 0($r26)] >> s
[0x00100028] [addi $r5, $r5, 0x1] >> s
[0x0010002C] [bltu $r5, $r9, -2] >> s
[0x00100024] [sw $r5, 0($r26)] >> s
[0x00100028] [addi $r5, $r5, 0x1] >> s
[0x0010002C] [bltu $r5, $r9, -2] >> s
[0x00100024] [sw $r5, 0($r26)] >> s
[0x00100028] [addi $r5, $r5, 0x1] >> r
$r0  = 0x00000000  $r1  = 0x00000000  $r2  = 0x00000000  $r3  = 0x00000000
$r4  = 0x00000000  $r5  = 0x00000002  $r6  = 0x00000000  $r7  = 0x00000000
$r8  = 0x00000000  $r9  = 0x0000000A  $r10 = 0x00000000  $r11 = 0x00000000
$r12 = 0x00000000  $r13 = 0x00000000  $r14 = 0x00000000  $r15 = 0x00000000
$r16 = 0x00000000  $r17 = 0x00000000  $r18 = 0x00000000  $r19 = 0x00000000
$r20 = 0x00000000  $r21 = 0x00000000  $r22 = 0x00000000  $r23 = 0x00000000
$r24 = 0x00000000  $r25 = 0x00000000  $r26 = 0x000F0000  $r27 = 0x00000000
$r28 = 0x00000000  $r29 = 0x0013FFFC  $r30 = 0x00000000  $r31 = 0x0010000C
[0x00100028] [addi $r5, $r5, 0x1] >> s 100
[0x00100030] [halt] >> q

    There are many interesting things here demonstrating how Luz works:

    • Note the start-up at 0x100000 - this is where Luz places the start-up segment - three instructions that set up the stack pointer and then call the user's code (asm_main). The user's asm_main starts running at the fourth instruction executed by the simulator.
    • li is a pseudo-instruction, broken into two real instructions: lui for the upper half of the register, followed by ori for the lower half. The reason for this is that li takes a 32-bit immediate, which can't fit in a Luz instruction. Therefore, it's broken into two parts which only need 16-bit immediates; this trick is common in RISC ISAs (see the sketch after this list).
    • Jump labels are resolved to be relative by the assembler: the jump to loop is replaced by -2.
    • Disassembly! The debugger shows the instruction decoded from every word where execution stops. Note how this exposes pseudo-instructions.
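The lui/ori split mentioned in the list is easy to see in code. The helper below is purely illustrative (not part of Luz); it only encodes the upper/lower-half arithmetic, and the sample value matches the $sp setup visible in the trace above:

def split_immediate(value):
    """Split a 32-bit immediate into the halves loaded by lui and ori."""
    upper = (value >> 16) & 0xFFFF   # goes into lui
    lower = value & 0xFFFF           # goes into ori
    return upper, lower

assert split_immediate(0x0013FFFC) == (0x0013, 0xFFFC)   # the $sp value set at startup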
    The in-progress RTL implementation

    Luz was a hobby project, but an ambitious one :-) Even before I wrote the first line of the assembler or simulator, I started working on an actual CPU implementation in synthesizable VHDL, meaning to get a complete RTL image to run on FPGAs. Unfortunately, I didn't finish this part of the project and what you find in Luz's experimental/luz_uc directory is only 75% complete. The ALU is there, the registers, the hookups to peripherals, even parts of the control path - dealing with instruction fetching, decoding, etc. My original plan was to implement a pipelined CPU (a RISC ISA makes this relatively simple), which perhaps was a bit too much. I should have started simpler.

    Conclusion

    Luz was an extremely educational project for me. When I started working on it, I mostly had embedded programming experience and was just starting to get interested in systems programming. Luz flung me into the world of assemblers, linkers, binary images, calling conventions, and so on. Besides, Python was a new language for me at the time - Luz started just months after I first got into Python.

    Its ~8000 lines of Python code are thus likely not my best Python code, but they should be readable and well commented. I did modernize it a bit over the years, for example to make it run on both Python 2 and 3.

I still hope to get back to the RTL implementation project one day. It's really very close to being able to run realistic assembly programs on real hardware (FPGAs). My dream back then was to fully close the loop by adding a Luz code generation backend to pycparser. Maybe I'll still fulfill it one day :-)

    Categories: FLOSS Project Planets

    PyTennessee: PyTN Profiles: Brandon Wannamaker and Emma

    Planet Python - Thu, 2017-01-05 10:26

    Speaker Profile: Brandon Wannamaker (@huntgathergrow)

    Brandon is a husband, hiker, computer nerd and backyard farmer. He’s currently a Quality Engineer at Emma in Nashville.

Brandon will be presenting “A Developer’s Guide to Full Stack Personal Maintenance” at 11:00AM Saturday (2/4) in Room 300. Developers spend a lot of time in front of a screen, not moving very much and often not eating good food. In this talk, I’ll share how yoga, my wife & bacon are helping me refactor & maintain my own personal stack.

    Sponsor Profile: Emma (@emmaemail)

    Emma is a provider of best-in-class email marketing software and services that help organizations of all sizes get more from their marketing. Through tailored editions of its platform for businesses, franchises, retailers, universities and agencies, Emma aims to offer enterprise-level capabilities in a team-friendly experience that’s simple and enjoyable. Key features include mobile-ready design templates, email automation, audience segmenting and dynamic content, plus integration with top CRM solutions, ecommerce platforms and social networks. Headquartered in Nashville, and with offices in Portland, New York and Melbourne, Emma powers the emails of more than 50,000 organizations worldwide, including Mario Batali, Bridgestone and Sylvan Learning Center. To learn more, visit myemma.com, follow Emma on Twitter or find us on Facebook.

    Categories: FLOSS Project Planets

    Jamie McClelland: End-to-End Encrypted group chats via XMPP

    Planet Debian - Thu, 2017-01-05 10:10

It's been over a year since my colleagues and I at the Progressive Technology Project abandoned Skype, first for IRC and soon after for XMPP. Thanks to the talented folks maintaining conversations.im, it's been a breeze to get everyone set up with accounts (8 Euros/year is quite worth it) and a group chat going.

    However, our group chats have not been using end-to-end encryption... until now. It wasn't exactly painless, so I'm sharing some tips and tricks.

    • Use either Conversations for Android (f-droid or Play) or Gajim for Windows or Linux. At the time of this writing, these are the only two applications I know of that support OMEMO, the XMPP extension that supports end-to-end encryption. Chat Secure for iOS, however, is just a release away. We managed to get things working with most of us using both Gajim and Conversations. It would probably have been much easier and smoother if everyone were only using Conversations because OMEMO is built-in to core, rather than Gajim, where OMEMO support is provided via an extension.
    • If you are using Gajim... After installing the OMEMO plugin in Gajim, fully restart Gajim. Similarly, if you add or remove a contact from your group, it seems you have to fully restart Gajim. Not sure why. If something is not working in Gajim, try restarting it.
    • Ensure that everyone in your group has added everyone else in the group to their roster. This was the single biggest and most confusing part of the process. If you are missing just one contact in your roster, then messages you type into the group chat will not show up without any indication as to what happened or why (on Gajim). Take this step first or prepare for confusing failures. Remember: everyone has to have everyone else in their roster.
    • Create the group in the android Conversations app, not in Gajim. There are strict requirements for how the group needs to be setup (private, members only and non-anonymous). I tried creating the group in Gajim and followed the directions but couldn't get it to work. Creating the group in Conversations worked right away. Remember: don't add members to the group unless everyone has them in their roster!
    • You can give your group an easy-to-remember name in your Gajim bookmarks, but under the hood, it will be assigned a random name. Conversations will show you the random name via "Conference Details" and Gajim will show it under the tab in the Messages window. When inviting people to the group you may need to select the random name.
    • Trust on First Use. In our experiment, we created a group for four people and we were all on a video and voice chat while we set things up. Three out of the four of us had both Gajim and Conversations in play. That meant 4 different people had to verify between 5 and 6 fingerprints each. We decided to use Trust on First Use rather than go through the process of reading out all the fingerprints (for the record, it still took us an hour and 15 minutes to get it all working). See Daniel Gultsch's interesting article on Trust on First Use.
    • If you get an error "This is not a group chat" it may be because you accidentally added the group as a contact to your roster. Click View -> Offline contacts. And if you see your group listed, delete it and close the tab in your Messages window (if one is open for it). You may also need to restart Gajim. Repeat until it no longer shows up in your roster.

    Anyone interested in secure XMPP may also find the Riseup XMPP page useful.

    Categories: FLOSS Project Planets

    Corey Oordt: The road to Docker, Django and Amazon ECS, part 4

    Planet Python - Thu, 2017-01-05 09:54

    For part 1

    For part 2

    For part 3

    Putting together a Dockerfile

    I couldn't wait any longer, so I wanted to see it running in Docker!

Choosing a Linux distribution

    We want the absolute smallest container we can get to run our project. The container is going to run Linux. We currently have Ubuntu on our servers, but default Ubuntu includes lots of stuff we don't need.

    We chose Alpine Linux because it was small and had a large set of packages to install.

    Setting up the Dockerfile

    We based our Dockerfile on João Ferreira Loff's Alpine Linux Python 2.7 slim image.

FROM alpine:3.5

# Install needed packages. Notes:
#   * dumb-init: a proper init system for containers, to reap zombie children
#   * musl: standard C library
#   * linux-headers: commonly needed, and an unusual package name from Alpine.
#   * build-base: used so we include the basic development packages (gcc)
#   * bash: so we can access /bin/bash
#   * git: to ease up clones of repos
#   * ca-certificates: for SSL verification during Pip and easy_install
#   * python2: the binaries themselves
#   * python2-dev: are used for gevent e.g.
#   * py-setuptools: required only in major version 2, installs easy_install so we can install Pip.
#   * build-base: used so we include the basic development packages (gcc)
#   * linux-headers: commonly needed, and an unusual package name from Alpine.
#   * python-dev: are used for gevent e.g.
#   * postgresql-client: for accessing a PostgreSQL server
#   * postgresql-dev: for building psycopg2
#   * py-lxml: instead of using pip to install lxml, this is faster. Must make sure requirements.txt has correct version
#   * libffi-dev: for compiling Python cffi extension
#   * tiff-dev: For Pillow: TIFF support
#   * jpeg-dev: For Pillow: JPEG support
#   * openjpeg-dev: For Pillow: JPEG 2000 support
#   * libpng-dev: For Pillow: PNG support
#   * zlib-dev: For Pillow:
#   * freetype-dev: For Pillow: TrueType support
#   * lcms2-dev: For Pillow: Little CMS 2 support
#   * libwebp-dev: For Pillow: WebP support
#   * gdal: For some Geo capabilities
#   * geos: For some Geo capabilities
ENV PACKAGES="\
    dumb-init \
    musl \
    linux-headers \
    build-base \
    bash \
    git \
    ca-certificates \
    python2 \
    python2-dev \
    py-setuptools \
    build-base \
    linux-headers \
    python-dev \
    postgresql-client \
    postgresql-dev \
    py-lxml \
    libffi-dev \
    tiff-dev \
    jpeg-dev \
    openjpeg-dev \
    libpng-dev \
    zlib-dev \
    freetype-dev \
    lcms2-dev \
    libwebp-dev \
    gdal \
    geos \
"

RUN echo \
    # replacing default repositories with edge ones
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories \
    # Add the packages, with a CDN-breakage fallback if needed
    && apk add --no-cache $PACKAGES || \
        (sed -i -e 's/dl-cdn/dl-4/g' /etc/apk/repositories && apk add --no-cache $PACKAGES) \
    # make some useful symlinks that are expected to exist
    && if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python2.7 /usr/bin/python; fi \
    && if [[ ! -e /usr/bin/python-config ]]; then ln -sf /usr/bin/python2.7-config /usr/bin/python-config; fi \
    && if [[ ! -e /usr/bin/easy_install ]]; then ln -sf /usr/bin/easy_install-2.7 /usr/bin/easy_install; fi \
    # Install and upgrade Pip
    && easy_install pip \
    && pip install --upgrade pip \
    && if [[ ! -e /usr/bin/pip ]]; then ln -sf /usr/bin/pip2.7 /usr/bin/pip; fi \
    && echo

# Chaining the ENV allows for only one layer, instead of one per ENV statement
ENV HOMEDIR=/code \
    LANG=en_US.UTF-8 \
    LC_ALL=en_US.UTF-8 \
    PYTHONUNBUFFERED=1 \
    NEW_RELIC_CONFIG_FILE=$HOMEDIR/newrelic.ini \
    GUNICORNCONF=$HOMEDIR/conf/docker_gunicorn_conf.py \
    GUNICORN_WORKERS=2 \
    GUNICORN_BACKLOG=4096 \
    GUNICORN_BIND=0.0.0.0:8000 \
    GUNICORN_ENABLE_STDIO_INHERITANCE=True \
    DJANGO_SETTINGS_MODULE=settings

WORKDIR $HOMEDIR

# Copying this file over so we can install requirements.txt in one cache-able layer
COPY requirements.txt $HOMEDIR/

RUN pip install --upgrade pip \
    && pip install -r $HOMEDIR/requirements.txt

# Copy the code
COPY . $HOMEDIR

EXPOSE 8000

CMD ["sh", "-c", "$HOMEDIR/docker-entrypoint.sh"]

    The first change that we made was to use Alpine Linux version 3.5, which has just been released.

    Next we listed all the OS-level packages we'll need in the PACKAGES environment variable.

    The next RUN statement sets the package repositories to the edge version, installs the packages in PACKAGES, creates a few convenience symlinks, and installs pip for our Python installs.

    We set up all the environment variables next.

    After setting the working directory, we copy our requirements.txt file into the container and install all our requirements. We do this step separately so it creates a cached layer that won't change unless the requirements.txt file changes. This saves tons of time if you keep building and re-building the image.

    We copy all our code over to the container, tell the container to expose port 8000 and specify the command to run (unless we specify a different command at runtime).

    You'll notice that the command looks strange. Because of the way that Docker executes the commands, it can't substitute the environment variable HOMEDIR. So we have to actually prefix our command $HOMEDIR/docker-entrypoint.sh with sh -c.

    But there's something missing

You'll notice that in this version there aren't any environment variables for the database, cache, or any other variables we set up earlier. We'll get them in there eventually, but for right now, we want to see if we can build and run this container and have it connect to our local database and cache.

    If you build it, it can run

    Building the docker image is really easy:

    docker build -t ngs:latest .

    This tags this built image as ngs:latest, which isn't what we are going to do in production, but it helps when testing everything.

    The output looks something like this:

$ docker build -t ngs:latest .
Sending build context to Docker daemon 76.43 MB
Step 1 : FROM alpine:3.5
 ---> 88e169ea8f46
Step 2 : ENV PACKAGES " dumb-init musl linux-headers build-base bash git ca-certificates python2 python2-dev py-setuptools build-base linux-headers python-dev postgresql-client postgresql-dev py-lxml libffi-dev tiff-dev jpeg-dev openjpeg-dev libpng-dev zlib-dev freetype-dev lcms2-dev libwebp-dev gdal geos "
 ---> Using cache
 ---> 184f9b7e79f9
Step 3 : RUN echo && echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" > /etc/apk/repositories && echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories && apk add --no-cache $PACKAGES || (sed -i -e 's/dl-cdn/dl-4/g' /etc/apk/repositories && apk add --no-cache $PACKAGES) && if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python2.7 /usr/bin/python; fi && if [[ ! -e /usr/bin/python-config ]]; then ln -sf /usr/bin/python2.7-config /usr/bin/python-config; fi && if [[ ! -e /usr/bin/easy_install ]]; then ln -sf /usr/bin/easy_install-2.7 /usr/bin/easy_install; fi && easy_install pip && pip install --upgrade pip && if [[ ! -e /usr/bin/pip ]]; then ln -sf /usr/bin/pip2.7 /usr/bin/pip; fi && echo
 ---> Using cache
 ---> 514dcc2f010d
Step 4 : ENV HOMEDIR /code LANG en_US.UTF-8 LC_ALL en_US.UTF-8 PYTHONUNBUFFERED 1 NEW_RELIC_CONFIG_FILE $HOMEDIR/newrelic.ini GUNICORNCONF $HOMEDIR/conf/docker_gunicorn_conf.py GUNICORN_WORKERS 2 GUNICORN_BACKLOG 4096 GUNICORN_BIND 0.0.0.0:8000 GUNICORN_ENABLE_STDIO_INHERITANCE True DJANGO_SETTINGS_MODULE settings
 ---> Running in 2d58f77c0a8e
 ---> 1342bb501c0f
Removing intermediate container 2d58f77c0a8e
Step 5 : WORKDIR $HOMEDIR
 ---> Running in a20a2fa64d2e
 ---> df977d30491c
Removing intermediate container a20a2fa64d2e
Step 6 : COPY requirements.txt $HOMEDIR/
 ---> e6ae37797b36
Removing intermediate container 820e3406fb5c
Step 7 : RUN pip install --upgrade pip && pip install -r $HOMEDIR/requirements.txt
 ---> Running in 4c65be60af03
Requirement already up-to-date: pip in /usr/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg
Collecting beautifulsoup4==4.5.1 (from -r /code/requirements.txt (line 2))
  Downloading beautifulsoup4-4.5.1-py2-none-any.whl (83kB)
Collecting cmsplugin-forms-builder==1.1.1 (from -r /code/requirements.txt (line 3))
...
Installing collected packages: beautifulsoup4, Django, ...
  Running setup.py install for future: started
  Running setup.py install for future: finished with status 'done'
  Installing from a newer Wheel-Version (1.1)
  Running setup.py install for unidecode: started
  Running setup.py install for unidecode: finished with status 'done'
Successfully installed Django-1.8.15 Fabric-1.10.2 ...
 ---> 165f7ae9507e
Removing intermediate container 4c65be60af03
Step 8 : COPY . $HOMEDIR
 ---> 1058d14b462f
Removing intermediate container 55f77f2e60d6
Step 9 : EXPOSE 8000
 ---> Running in 38e8c650a529
 ---> 7c53dcf41f2a
Removing intermediate container 38e8c650a529
Step 10 : CMD sh -c $HOMEDIR/docker-entrypoint.sh
 ---> Running in 1b8781bf6458
 ---> a255a40e30b8
Removing intermediate container 1b8781bf6458
Successfully built a255a40e30b8

    I've truncated most of the output from installing the Python dependencies. If I run it again, steps 6 and 7 use the existing cache:

Step 6 : COPY requirements.txt $HOMEDIR/
 ---> Using cache
 ---> e6ae37797b36
Step 7 : RUN pip install --upgrade pip && pip install -r $HOMEDIR/requirements.txt
 ---> Using cache
 ---> 165f7ae9507e

    If I make changes to any other part of our project, steps 1-7 use the cache, and it only has to copy over the new code.

    How big is it?

    So how big is the container? Running docker images gives us:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ngs                 latest              a255a40e30b8        11 minutes ago      590.1 MB

    So 590.1 MB. What makes up that space? We can take a look at the layers created by our Dockerfile. Running docker history ngs:latest returns:

IMAGE               CREATED             CREATED BY                                      SIZE        COMMENT
a255a40e30b8        7 minutes ago       /bin/sh -c #(nop) CMD ["sh" "-c" "$HOMEDIR/d    0 B
7c53dcf41f2a        7 minutes ago       /bin/sh -c #(nop) EXPOSE 8000/tcp               0 B
1058d14b462f        7 minutes ago       /bin/sh -c #(nop) COPY dir:0da094a2328f4e5bfb   73.69 MB
165f7ae9507e        7 minutes ago       /bin/sh -c pip install --upgrade pip && pip     227.1 MB
e6ae37797b36        11 minutes ago      /bin/sh -c #(nop) COPY file:25e352c295f212113   3.147 kB
df977d30491c        11 minutes ago      /bin/sh -c #(nop) WORKDIR /code                 0 B
1342bb501c0f        11 minutes ago      /bin/sh -c #(nop) ENV HOMEDIR=/code LANG=en_    0 B
514dcc2f010d        3 days ago          /bin/sh -c echo && echo "http://dl-cdn.alpi     285.3 MB
184f9b7e79f9        3 days ago          /bin/sh -c #(nop) ENV PACKAGES= dumb-init       0 B
88e169ea8f46        6 days ago          /bin/sh -c #(nop) ADD file:92ab746eb22dd3ed2b   3.984 MB

    At the bottom layer is the Alpine Linux 3.5 distro, which is only 3.984 MB. Our OS-level packages take up 285.3 MB. Our Python dependencies take up 227.1 MB. Our code is 73.69 MB.

    Make it run! Make it run!

    We want this container to connect to resources running on our local computer.

    Make PostgreSQL and Redis listen more

My default installations of Redis and PostgreSQL only listen for connections on the loopback address. I modified them to listen on every interface.

    Now my container will be able to connect to them.

    Give the container the address

The container has no idea where it is running. Typically all the connections are made when Docker sets up the containers (and that is what we want, eventually). We need to tell the container where it is running.

    We are going to do this with a temporary script called docker-run.sh

#!/bin/bash

export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)

docker rm ngs-container

docker run -ti \
    -p 8000:8000 \
    --add-host dockerhost:$DOCKERHOST \
    --name ngs-container \
    -e DATABASE_URL=postgresql://coordt:password@dockerhost:5432/education \
    -e CACHE_URL=rediscache://dockerhost:6379/0?CLIENT_CLASS=site_ext.cacheclient.GracefulClient \
    ngs:latest

The first line sets the DOCKERHOST environment variable to the local computer's current IP address.

    The second line removes any existing containers named ngs-container. Note: Docker doesn't clean up after itself very well. This is very well known, and there are several different solutions, I'm sure. After doing some Docker building and running, you end up with lots of unused images and containers. This script attempts to remove old containers by naming the container ngs-container each time.

The last line tells Docker to run the ngs:latest image with a pseudo-tty and interactivity (-ti), maps container port 8000 to local port 8000 (-p 8000:8000), adds dockerhost to the container's /etc/hosts file with the local computer's current IP address (--add-host dockerhost:$DOCKERHOST), names the container ngs-container (--name ngs-container), and sets the DATABASE_URL and CACHE_URL environment variables.

    Now, make docker-run.sh executable with a chmod a+x, and you can run it.

$ ./docker-run.sh
Copying '/code/static/concepts/jquery-textext.js'
Copying '/code/static/autocomplete_light/addanother.js'
...
Post-processed 'js/tiny_mce/plugins/inlinepopups/skins/clearlooks2/img/alert.gif' as 'js/tiny_mce/plugins/inlinepopups/skins/clearlooks2/img/alert.568d4cf84413.gif'
Post-processed 'js/tiny_mce/plugins/inlinepopups/skins/clearlooks2/img/corners.gif' as 'js/tiny_mce/plugins/inlinepopups/skins/clearlooks2/img/corners.55298b5baaec.gif'
...
4256 static files copied to '/code/staticmedia', 4256 post-processed.
Operations to perform:
  Synchronize unmigrated apps: redirects, ...
  Apply all migrations: teachingatlas, ...
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Running migrations:
  No migrations to apply.

    If you remember from a previous post, the docker-entrypoint.sh runs two commands before it starts gunicorn.

    The first is collecting (and post-processing) the static media. I've truncated the output for copying and the post-processing of said static media, but you can see that it ran.

The next is a database migration. I've truncated the output somewhat, but you can see that nothing was required to migrate.

    Now when I try http://localhost:8000, I get a web page! Success!

    Next time

    In the next installment I'll get the container serving its own static files.

    Categories: FLOSS Project Planets

    Mediacurrent: Migration with Custom Values in Drupal 8

    Planet Drupal - Thu, 2017-01-05 09:39
    Custom Preprocessing

    In Drupal 7 there were numerous ways to preprocess data prior to migrating values. This was especially useful when inconsistencies in the data source needed to be addressed. One example I have dealt with in the past was migrating email addresses from an old system that didn’t check for properly formatted emails on its user fields. These errors would produce faulty data in the new Drupal site. I preprocessed the field data for the most common mistakes which reduced the amount of erroneous addresses brought over.

    Categories: FLOSS Project Planets

    Michal Čihař: Gammu 1.38.1

    Planet Debian - Thu, 2017-01-05 09:00

Today Gammu 1.38.1 has been released. This is a bugfix release fixing several minor bugs which were discovered in 1.38.0.

The Windows binaries will be available shortly. These are built using AppVeyor and will help bring Windows users back to the latest versions.

    Full list of changes and new features can be found on Gammu 1.38.1 release page.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu

    Categories: FLOSS Project Planets

    Yoong Kang Lim: Event sourcing in Django

    Planet Python - Thu, 2017-01-05 08:17

    Django comes with "batteries included" to make CRUD (create, read, update, delete) operations easy. It's nice that the CR part (create and read) of CRUD is so easy, but have you ever paused to think about the UD part (update and delete)?

    Let's look at delete. All you need to do is this:

    ReallyImportantModel.objects.get(id=32).delete() # gone from the database forever

Just one line, and your data is gone forever. It can be done accidentally. Or you can do it deliberately, only to later realise that your old data is valuable too.

    Now what about updating?

    Updating is deleting in disguise.

    When you update, you're deleting the old data and replacing it with something new. It's still deletion.

important = ReallyImportantModel.objects.filter(id=32)
important.update(data={'new_data': 'This is new data'})  # OLD DATA GONE FOREVER

    Okay, but why do we care?

    Let's say we want to know the state of ReallyImportantModel 6 months ago. Oh that's right, you've deleted it, so you can't get it back.

    Well, that's not exactly true -- you can recreate your data from backups (if you don't backup your database, stop reading right now and fix that immediately). But that's clumsy.

    So by only storing the current state of the object, you lose all the contextual information on how the object arrived at this current state. Not only that, you make it difficult to make projections about the future.

Event sourcing[1] can help with that.

    Event sourcing

    The basic concept of event sourcing is this:

    • Instead of just storing the current state, we also store the events that lead up to the current state
    • Events are replayable. We can travel back in time to any point by replaying every event up to that point in time
    • That also means we can recover the current state just by replaying every event, even if the current state was accidentally deleted
    • Events are append-only.

    To gain an intuition, let's look at an event sourcing system you're familiar with: your bank account.

    Your "state" is your account balance, while your "events" are your transactions (deposit, withdrawal, etc.).

    Can you imagine a bank account that only shows you the current balance?

    That is clearly unacceptable ("Why do I only have $50? Where did my money go? If only I could see the the history."). So we always store the history of transfers as the source of truth.

    Implementing event sourcing in Django

    Let's look at a few ways to do this in Django.

    Ad-hoc models

If you have one or two important models, you probably don't need a generalizable event sourcing solution that applies to all models.

    You could do it on an ad-hoc basis like this, if you can have a relationship that makes sense:

# in an app called 'account'
from django.db import models
from django.conf import settings


class Account(models.Model):
    """Bank account"""
    balance = models.DecimalField(max_digits=19, decimal_places=6)
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='account')


class Transfer(models.Model):
    """
    Represents a transfer in or out of an account.

    A positive amount indicates that it is a transfer into the account,
    whereas a negative amount indicates that it is a transfer out of
    the account.
    """
    account = models.ForeignKey('account.Account', on_delete=models.PROTECT,
                                related_name='transfers')
    amount = models.DecimalField(max_digits=19, decimal_places=6)
    date = models.DateTimeField()

    In this case your "state" is in your Account model, whereas your Transfer model contains the "events".

    Having Transfer objects makes it trivial to recreate any account.
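Replaying those events is just an aggregation over the history. Here is a minimal sketch (not from the original post) of rebuilding a balance purely from the Transfer rows defined above:

from django.db.models import Sum

def rebuild_balance(account):
    """Recompute an account's balance purely from its transfer history."""
    total = account.transfers.aggregate(total=Sum('amount'))['total']
    return total or 0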

    Using an Event Store

    You could also use a single Event model to store every possible event in any model. A nice way to do this is to encode the changes in a JSON field.

    This example uses Postgres:

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.contrib.postgres.fields import JSONField
from django.db import models


class Event(models.Model):
    """Event table that stores all model changes"""
    content_type = models.ForeignKey(ContentType, on_delete=models.PROTECT)
    object_id = models.PositiveIntegerField()
    time_created = models.DateTimeField()
    content_object = GenericForeignKey('content_type', 'object_id')
    body = JSONField()

You can then add methods to any model that mutate the state:

import json

from django.conf import settings
from django.db import models
from django.utils import timezone


class Account(models.Model):
    balance = models.DecimalField(max_digits=19, decimal_places=6, default=0)
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='account')

    def make_deposit(self, amount):
        """Deposit money into account"""
        Event.objects.create(
            content_object=self,
            time_created=timezone.now(),
            body=json.dumps({
                'type': 'made_deposit',
                # Decimal isn't JSON-serializable, so store a plain number
                'amount': float(amount),
            })
        )
        self.balance += amount
        self.save()

    def make_withdrawal(self, amount):
        """Withdraw money from account"""
        Event.objects.create(
            content_object=self,
            time_created=timezone.now(),
            body=json.dumps({
                'type': 'made_withdrawal',
                'amount': float(-amount),  # withdraw = negative amount
            })
        )
        self.balance -= amount
        self.save()

    @classmethod
    def create_account(cls, owner):
        """Create an account"""
        account = cls.objects.create(owner=owner, balance=0)
        Event.objects.create(
            content_object=account,
            time_created=timezone.now(),
            body=json.dumps({
                'type': 'created_account',
                'id': account.id,
                'owner_id': owner.id,
            })
        )
        return account

    So now you can do this:

account = Account.create_account(owner=User.objects.first())
account.make_deposit(decimal.Decimal(50.0))
account.make_deposit(decimal.Decimal(125.0))
account.make_withdrawal(decimal.Decimal(75.0))

events = Event.objects.filter(
    content_type=ContentType.objects.get_for_model(account),
    object_id=account.id
)
for event in events:
    print(event.body)

    Which should give you this:

    {"type": "created_account", "id": 2, "owner_id": 1} {"type": "made_deposit", "amount": 50.0} {"type": "made_deposit", "amount": 125.0} {"type": "made_withdrawal", "amount": -75}

    Again, this makes it trivial to write any utility methods to recreate any instance of Account, even if you accidentally dropped the whole accounts table.

    Snapshotting

    There will come a time when you have too many events to efficiently replay the entire history. In this case, a good optimisation step would be snapshots taken at various points in history. For example, in our accounting example one could save snapshots of the account in an AccountBalance model, which is a snapshot of the account's state at a point in time.

You could do this via a scheduled task. Celery[2] is a good option.
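As a rough sketch of what such a snapshot could look like (the AccountBalance name comes from the text above; the fields and the periodic-task shape are assumptions):

from django.db import models
from django.utils import timezone


class AccountBalance(models.Model):
    """Point-in-time snapshot of an account's state."""
    account = models.ForeignKey('account.Account', on_delete=models.PROTECT,
                                related_name='snapshots')
    balance = models.DecimalField(max_digits=19, decimal_places=6)
    snapshot_time = models.DateTimeField(default=timezone.now)


def take_snapshots():
    """Run periodically, e.g. from a scheduled Celery task."""
    for account in Account.objects.all():
        AccountBalance.objects.create(account=account, balance=account.balance)

Replaying then only needs the events recorded after the latest snapshot, rather than the full history.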

    Summary

    Use event sourcing to maintain an append-only list of events for your critical data. This effectively allows you to travel in time to any point in history to see the state of your data at that time.

    UPDATE: If you want to see an example repo, feel free to take a look here: https://github.com/yoongkang/event_sourcing_example

[1] Martin Fowler wrote a detailed description of event sourcing on his website: http://martinfowler.com/eaaDev/EventSourcing.html

[2] Celery project: http://www.celeryproject.org/

    Categories: FLOSS Project Planets