FLOSS Project Planets

2bits: Backdrop: an alternative Drupal fork

Planet Drupal - Thu, 2015-07-02 23:16
Last week, at the amazing Drupal North regional conference, I gave a talk on Backdrop: an alternative fork of Drupal. The slides from the talk are attached below, in PDF format.
Categories: FLOSS Project Planets

Drupal core announcements: Portsmouth NH theme system critical sprint recap

Planet Drupal - Thu, 2015-07-02 23:16

In early June a Drupal 8 theme system critical issues sprint was held in Portsmouth, New Hampshire as part of the D8 Accelerate program.

The sprint started the afternoon of June 5 and continued until midday June 7.

Sprint goals

We set out to move forward the two (at the time) theme system criticals, #2273925: Ensure #markup is XSS escaped in Renderer::doRender (created May 24, 2014) and #2280965: [meta] Remove or document every SafeMarkup::set() call (created June 6, 2014).

Sponsors

The Drupal Association provided the D8 Accelerate grant which covered travel costs for joelpittet and Cottser.

Bowst provided the sprint space.

As part of its NHDevDays series of contribution sprints, the New Hampshire Drupal Group provided snacks and refreshments during the sprint, lunch and even dinner on Saturday.

Digital Echidna provided time off for Cottser.

Summary

xjm committed #2273925: Ensure #markup is XSS escaped in Renderer::doRender Sunday afternoon! xjm’s tweet sums things up nicely.

As for the meta (which comprises about 50 sub-issues), by the end of the sprint we had patches on over 30 of them, 3 had been committed, and 7 were in the RTBC queue.

Thanks to the continued momentum provided by the New Jersey sprint, as of this writing approximately 20 issues from the meta issue have been resolved.

Friday afternoon

peezy kicked things off with a brief welcome and acknowledgements. joelpittet and Cottser gave an informal introduction to the concepts and tasks at hand for the sprinters attending.

After that, leslieg on-boarded our Friday sprinters (mostly new contributors), getting them set up with Drupal 8, IRC, Dreditor, and so on. leslieg and a few others then went to work reviewing documentation around #2494297: [no patch] Consolidate change records relating to safe markup and filtering/escaping to ensure cross references exist.

Meanwhile in "critical central" (what we called the meeting room where the work on the critical issues was happening)…

lokapujya and joelpittet got to work on the remaining tasks of #2273925: Ensure #markup is XSS escaped in Renderer::doRender.

cwells and Cottser started the work on removing calls to SafeMarkup::set() by working on #2501319: Remove SafeMarkup::set in _drupal_log_error, DefaultExceptionSubscriber::onHtml, Error::renderExceptionSafe.

Thai food was ordered in, and many of us continued working on issues late into the evening.

Saturday

joelpittet and Cottser gave another brief introduction to keep new arrivals on the same page and reassert concepts from the day before.

leslieg did some more great on-boarding Saturday and worked with a handful of new contributors on implementing #2494297: [no patch] Consolidate change records relating to safe markup and filtering/escaping to ensure cross references exist. The idea was that by reviewing and working on this documentation the contributors would be better equipped to work directly on the issues in the SafeMarkup::set() meta.

Mid-morning Cottser led a participatory demo with the whole group of a dozen or so sprinters, going through one of the child issues of the meta and ending up with a patch. This allowed us to walk through the whole process and think out loud the whole time.


The Benjamin Melançon XSS attack in action. Having some fun while working on our demo issue.

By this time we had identified some common patterns after working on enough of these issues.

By the end of Saturday all of the sprinters including brand new contributors were collaborating on issues from the critical meta and the issue stickies were flying around the room with fervor (a photo of said issue stickies is below).


Then we had dinner :)

Sunday morning

drupal.org was down for a while.

We largely picked up where we left off Saturday, cranked out more patches, and joelpittet and Cottser started to review the work that had been done the day before that was in the “Needs Human” column.


Our sprint board looked something like this on the last day of the sprint.

Thank you

Thanks to the organizing committee (peezy, leslieg, cwells, and kbaringer), xjm, effulgentsia, New Hampshire DUG, Seacoast DUG, Bowst, Drupal Association, Digital Echidna, and all of our sprinters: cdulude, Cottser, cwells, Daniel_Rose, dtraft, jbradley428, joelpittet, kay_v, kbaringer, kfriend, leslieg, lokapujya, mlncn, peezy, sclapp, tetranz.

Attachments: 20150606_114556.jpg (453.91 KB), 20150606_184029.jpg (549.46 KB), 20150607_130317.jpg (353.87 KB), IMG_8589.JPG (781.47 KB)
Categories: FLOSS Project Planets

Sumith: GSoC Progress - Week 6

Planet Python - Thu, 2015-07-02 20:00

Hello! I received a mail a few minutes into typing this: I passed the midterm review successfully :)
It left me wondering how these guys process so many evaluations so quickly.
I do have to confirm with Ondřej about this.
Anyway, the project goes on, and here is this week's summary.

Progress

SymEngine successfully moved to using Catch as a testing framework.

The Travis builds for clang were breaking, which led me to play around with Travis and clang builds to fix the issue. The Linux clang build used to break because we would mix up and link libraries like GMP compiled with different standard libraries.
Thanks to Isuru for lending a helping hand and fixing it in his PR.

The next task was to make SYMENGINE_ASSERT not use the standard assert(), so I wrote a custom assert that simulates the built-in one.
We could then add -DNDEBUG as a release flag when Piranha is a dependency; this was also done.
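The idea of a custom assert that simulates the built-in one (report the failed expression plus file and line, then bail out) can be sketched as follows. This is an illustrative sketch in Python; SymEngine's real SYMENGINE_ASSERT is a C++ macro, and the names here are not its actual API.

```python
import inspect


def symengine_style_assert(cond, expr="condition"):
    """Simulate the built-in assert: on failure, report the expression
    together with the caller's file and line, then raise.
    Unlike assert()/NDEBUG, this stays active in release builds."""
    if not cond:
        caller = inspect.stack()[1]
        raise AssertionError(
            f"{caller.filename}:{caller.lineno}: Assertion `{expr}` failed."
        )
```

The point of rolling your own is exactly the -DNDEBUG interaction mentioned above: the standard assert() compiles away under NDEBUG, while a custom macro lets the project decide independently when its own checks are active.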

Started work on the Expression wrapper; a PR that starts off from Francesco's work has been sent in.

Investigated the slowdown in benchmarks that I have been reporting in the last couple of posts. Using git bisect (amazing tool, good to see binary search in action!), the first bad commit was tracked down. We realized that the inclusion of the piranha.hpp header caused the slowdown, and it was resolved by including just the required header, mp_integer.hpp.
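The "binary search in action" mentioned above, which git's bisect command automates over commit history, can be sketched as a standalone function (a simplified model of the idea, not git's actual implementation):

```python
def first_bad(is_bad, good, bad):
    """Binary-search the commit range (good, bad] for the first bad commit.

    is_bad(i) tests commit number i (e.g. by building and benchmarking it).
    Assumes good < bad, is_bad(good) is False, and is_bad(bad) is True.
    """
    while bad - good > 1:
        mid = (good + bad) // 2
        if is_bad(mid):
            bad = mid    # first bad commit is at mid or earlier
        else:
            good = mid   # mid is still good; look later
    return bad
```

With, say, 50 candidate commits this needs only about six tests instead of fifty, which is what makes bisecting a slowdown across months of history practical.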
With the immense help of Francesco, the problem was cornered down to this:
* Inclusion of thread_pool leads to the slowdown; a global variable that it declares seems to be the specific cause.
* In general, a multi-threaded application may cause some compiler optimizations to be switched off, hence the slowdown.
* Since this benchmark is memory-allocation intensive, another speculation is that the compiler allocates memory differently.

This SO question asked by @bluescarni should lead to very interesting developments.

We have to investigate this problem and get it sorted, not only because we depend on Piranha: we might also have multi-threading in SymEngine itself later.

Report

No benchmarking was done this week.
Here are my PR reports.

WIP
* #500 - Expression Wrapper

Merged
* #493 - The PR with Catch got merged.
* #498 - Made SYMENGINE_ASSERT use a custom assert instead of assert(), and -DNDEBUG as a release flag with PIRANHA.
* #502 - Make poly_mul use mpz_addmul (FMA), a nice speedup for expand2b.
* #496 - En route to fixing SYMENGINE_ASSERT, led to a minor fix in one of the assert statements.
* #491 - Minor fix in compiler choice documentation.

Targets for Week 6
  • Get the Expression class merged.
  • Investigate and fix the slow-downs.

The rest of the tasks can be finalized in a later discussion with Ondřej.

That's all this week.
Ciao

Categories: FLOSS Project Planets

Pointing devices KCM: update #2

Planet KDE - Thu, 2015-07-02 19:00

For general information about the project, look at this post

Originally I planned to work on the KCM UI at this time. But as I am unsure what it should look like, I started a discussion on the VDG forum and decided to switch to other tasks.

Currently the KCM looks like this:

Don’t worry, it isn’t the final UI, just a minimal demo :)

The KDED module is, I think, almost complete. It can apply settings from the configuration file, and has a method exported to D-Bus to reload the configuration for all devices or for a specific device. Of course, it also applies settings immediately when a device is plugged in. The only thing missing is auto-disabling of some devices (like disabling the touchpad when an external mouse is present).

As usual, here is a link to the repository.

Also, I started working on a D-Bus API for KWin. The API will expose most of libinput's configuration settings. Currently it lists all available devices and some of their most important read-only properties (like name, hardware IDs, capabilities), and allows enabling/disabling tap-to-click as an example of a writable property. As I already said, the KCM isn't ready yet, but I was able to enable tap-to-click on my touchpad using qdbusviewer.

My kwin repo clone is here, branch libinput-dbusconfig

Categories: FLOSS Project Planets

Enrico Zini: italian-fattura-elettronica

Planet Debian - Thu, 2015-07-02 17:48
Billing an Italian public administration

Here's a simple guide for how I managed to bill one of my customers as is now mandated by law in Italy.

Create a new virtualbox machine

I would never do any of this to any system I would ever want to use for anything else, so it's virtual machine time.

  • I started virtualbox, created a new machine for Ubuntu 32-bit, 8 GB disk, 4 GB RAM, and placed the .vdi image in an encrypted partition. The web services of Infocert's fattura-pa require "32-bit Java (JRE), version 1.6 or higher".
  • I installed Ubuntu 12.04 on it: that is what dike declares to support.
  • I booted the VM, installed virtualbox-guest-utils, and made sure I also had virtualbox-guest-x11.
  • I restarted the VM so that I could resize the virtualbox window and have Ubuntu resize itself as well. Now I could actually read popup error messages in full.
  • I changed the desktop background to something that gave me the idea that this is an untrusted machine where I need to be very careful of what I type. I went for bright red.
Install smart card software into it
  • apt-get install pcscd pcsc-tools opensc
  • In virtualbox, I went to Devices/USB devices and enabled the smart card reader in the virtual machine.
  • I ran pcsc_scan to see if it could see my smart card.
  • I ran Firefox, went to preferences, advanced, security devices, load. Module name is "CRS PKCS#11", module path is /usr/lib/opensc-pkcs11.so
  • I went to https://fattura-pa.infocamere.it/fpmi/service and I was able to log in. To log in, I had to type the PIN 4 times into popups that offered little explanations about what was going on, enjoying cold shivers because the smart card would lock itself at the 3rd failed attempt.
  • Congratulations to myself! I thought that all was set, but unfortunately, at this stage, I was not able to do anything else except log into the website.
Descent into darkness Set up things for fattura-pa
  • I got the PDF with the setup instructions from here. Get it too, for a reference, a laugh, and in case you do not believe the instructions below.
  • I went to https://www.firma.infocert.it/installazione/certificato.php, and saved the two certificates.
  • Firefox, preferences, advanced, show certificates, I imported both CA certificates, trusted for everything, all my base are belong to them.
  • apt-get install icedtea-plugin
  • I went to https://fattura-pa.infocamere.it/fpmi/service and tried to sign. I could not: I got an error about invalid UTF8 for something or other in Firefox's standard error. Firefox froze and had to be killed.
Set up things for signing locally with dike
  • I removed icedtea so that I could use the site without Firefox crashing.
  • I installed DiKe For Ubuntu 12.04 32bit
  • I ran dikeutil to see if it could talk to my smart card
  • When signing with the website, I chose the manual signing options and downloaded the zip file with the xml to be signed.
  • I got a zip file, unzipped it.
  • I loaded the xml into dike.
  • I signed it with dike.
  • I got this error message: "nessun certificato di firma presente sul dispositivo di firma" (no signature certificate present on the signing device) and then this error message: "Impossibile recuperare il certificato dal dispositivo di firma" (unable to retrieve the certificate from the signing device). No luck.
Set up things for signing locally with ArubaSign
  • I went to https://www.pec.it/Download.aspx
  • I downloaded ArubaSign for Linux 32 bit.
  • Oh! People say that it only works with Oracle's version of Java.
  • sudo add-apt-repository ppa:webupd8team/java
  • apt-get update
  • apt-get install oracle-java7-installer
  • During the installation process I had to agree to also sell my soul to Oracle.
  • tar axf ArubaSign*.tar*
  • cd ArubaSign-*/apps/dist
  • java -jar ArubaSign.jar
  • I let it download its own updates. Another time I did not. It does not seem to matter: I get asked that question every time I start it anyway.
  • I enjoyed the fancy brushed metal theme, and had an interesting time navigating an interface where every label on every icon or input field was truncated.
  • I downloaded https://www.pec.it/documenti/Manuale_ArubaSign2_firma%20Remota_V03_02_07_2012.pdf to get screenshots of that interface with all the labels intact
  • I signed the xml that I got from the website. I was told that I really needed to view carefully what I was signing, because the signature would be legally binding.
  • I enjoyed carefully reading a legally binding, raw XML file.
  • I told it to go ahead, and there was now a .p7m file ready for me. I rejoiced, as now I might, just might actually get paid for my work.
Try fattura-pa again

Maybe fattura-pa would work with Oracle's Java plugin?

  • I went to https://fattura-pa.infocamere.it/fpmi/service
  • I got asked to verify java at www.java.com. I did it.
  • I told Firefox to enable java.
  • Suddenly, and while I was still in java.com's tab, I got prompted about allowing Infocert's applet to run: I allowed it to run.
  • I also got prompted several times, still while the current tab was not even Infocert's tab, about running components that could compromise the security of my system. I allowed and unblocked all of them.
  • I entered my PIN.
  • Congratulations! Now I have two ways of generating legally binding signatures with government issued smart cards!
Aftermath

I shut down that virtual machine and I'm making sure I never run anything important on it. Except, of course, generating legally binding signatures as required by the Italian government.

What could possibly go wrong?

Categories: FLOSS Project Planets

James Mills: A Docker-based mini-PaaS

Planet Python - Thu, 2015-07-02 17:20
The Why

So by now everyone has heard of Docker right? (If not, you have some catching up to do!)

Why have I created this mini-PaaS based around Docker? What's wrong with the myriad of platforms and services out there?

Well, nothing! The various platforms, services and stacks that exist to serve, deploy and monitor applications using Docker all have their use-cases, pros and cons.

If you recall the post Flynn vs. Deis: The Tale of Two Docker Micro-PaaS Technologies, I said the following about ~10 months ago.

I've stuck by this, and 10 months later here it is.

docker-compose.yml:

autodock:
  image: prologic/autodock
  ports:
    - "1338:1338/udp"
    - "1338:1338/tcp"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

autodocklogger:
  image: prologic/autodock-logger
  links:
    - autodock

autodockhipache:
  image: prologic/autodock-hipache
  links:
    - autodock
    - hipache:redis

hipache:
  image: hipache
  ports:
    - 80:80
    - 443:443

Gist here: https://gist.github.com/prologic/72ca4076a63d5dd1687d

This uses the following tools and software (all of which I wrote as well):

Now. Here's the thing. Nothing here is particularly fancy.

  • There's no DNS management
  • There's no fancy services to speak of
  • There's no web interface at all.
  • There's no API or even a CLI tool

So what is there?

Basically this works in a very simple way.

  1. Setup a wildcard A record on a domain pointing it at your Docker host.
  2. Spin up containers with the -e VIRTUALHOST environment variable.

That's it!

The How

How this works:

  • autodock is a daemon that listens for Docker events via the Docker Remote API
  • autodock is pluggable and provides a UDP-based distributed interface to other plugins.
  • When autodock sees a Docker event it broadcasts it to all nodes.
  • When autodock-hipache sees container start/stop/died/killed/paused/unpaused events it:
    - Checks for a VIRTUALHOST environment variable.
    - Checks for a PORT environment variable (optional, default: 80).
    - Checks the configuration of the container for exposed ports.
    - If PORT is a valid exposed port, reconfigures hipache with the virtual host provided by the VIRTUALHOST environment variable, routing web requests to the container's IP address and the port given by PORT.
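The routing decision described above can be sketched roughly as follows (the function and names are hypothetical; the real autodock-hipache plugin may differ in detail):

```python
def route_for_container(env, exposed_ports):
    """Decide the (virtualhost, port) pair hipache should route to,
    or return None if the container should not be routed.

    env: the container's environment variables, as a dict
    exposed_ports: the set of ports the container exposes
    """
    host = env.get("VIRTUALHOST")
    if host is None:
        return None                      # not a web container; ignore it
    port = int(env.get("PORT", 80))      # PORT is optional, default 80
    if port not in exposed_ports:
        return None                      # PORT must be a valid exposed port
    return (host, port)
```

So a container started with `-e VIRTUALHOST=hello.local` exposing port 80 gets routed, while containers without the variable are simply left alone.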
Usage

Using this is quite simple. Copy the above docker-compose.yml and run:

docker-compose up -d

Then start a container:

docker run -d -e VIRTUALHOST=hello.local prologic/hello

And visit: http://hello.local

Assuming (of course) hello.local points to your Docker host in /etc/hosts.

Of course real deployments of this will use real domains and a real DNS server.

Two example deployments of this can be seen here:

Enjoy! :)

Update: I have now created a new project/tool to help facilitate the setup of this little minimal PaaS. Check out autodock-paas!

Categories: FLOSS Project Planets

Fiber UI Experiments – Conclusion?

Planet KDE - Thu, 2015-07-02 17:13

It’s been one heckuva road, but I think the dust is starting to settle on the UI design for Fiber, a new web browser which I’m developing for KDE. After some back-and-forth from previous revisions, there are some exciting new ideas in this iteration! Please note that this post is about design experiments – the development status of the browser is still very low-level and won’t reach the UI stage for some time. These experiments are being done now so I can better understand the structure of the browser as I program around a heavily extension-based UI, so that when I do solidify the APIs we have a rock-solid foundation.

Just as an aside before I get started: almost any time I mention “QML”, there is a chance that whatever is being driven could alternatively use HTML. I’m looking into this, but make no guarantees.

As a recap of the previous experiments, one of the biggest things that became very clear from feedback was that the address bar isn’t going away, and I’m not going to hide it. I was a sad panda, but there are important things the address bar provides which I just couldn’t work around. Luckily, I found some ways to improve upon the existing address bar ideology via aggressive use of extensions, and a slightly different usage compared to how contemporary browsers embed extensions into the input field – so let’s take a look at the current designs:


By default, Fiber will have either “Tabs on Side” or “Tabs on Bottom”; this hasn’t been decided yet, but there will also be a “Tabs on Top” option (which I have decided will not be the default, for a few reasons). Gone is the search box as it was in previous attempts – replaced with a proper address bar which I’m calling “Multitool” – and here’s more about it and why I’m a little excited:

Multitool

Fiber is going to be an extensions-based browser. Almost everything will be an extension, from basic navigational elements (back, forward) to the bookmarks system – and all will be either disable-able or replaceable. This means every button, every option, every utility will be configurable. I’ve studied how other browsers embed extensions in the address bar, and none of them really integrate with it in a meaningful and clearly defined way. Multitool is instead getting a well-defined interface for extensions which make use of the bar;

Extensions which have searchable or traversable content will be candidates for extending into the Multitool, which includes URL entry, search, history, bookmarks, downloads, and other things. Since these are extensions with a well-defined API you will be able to ruthlessly configure what you want or don’t want to show up, and only the URL entry will be set in stone. Multitool extensions will have 3 modes which you can pick from: background, button, and separate.

Background extensions will simply provide additional results when typing into the address bar. By default, this will be the behaviour of things like current tabs, history, and shortcut-enabled search. Button extensions in Multitool will provide a clickable option which will take over the Multitool when clicked, offering a focused text input and an optional QML-based “home popout”. Lastly, “separate” extensions will split the text input, offering something similar to a separate text widget, only integrated into the address bar.

The modes and buttons will be easily configurable, and so you can choose to have extensions simply be active in the background, or you could turn on the buttons, or disable them entirely. Think of this as applying KRunner logic to a browser address bar, only with the additional ability to perform “focused searches”.

Shown on the right side of the Multitool are the two extensions with dedicated buttons, bookmarks and search, which will be the default rollout. When you click on those embedded buttons they will take over the address bar and you may begin your search. These tools will also be able to specify an optional QML file for their “home” popout. For example, the Bookmarks home popout could be a speed-dial UI, and History could be a time-machine-esque scrollthrough. Seen above is a speed dial popout. With Bookmarks and Search being in button mode by default, just about everything else that performs local searches will be in “background mode”, except keyword-based searches, which will be enabled but will require configuration. Generally, the address portion of Multitool will NOT, out of the box, beam what you type to a 3rd party, but the search extension will. I have not selected search providers yet.

We also get a two-for-one deal for fast filtering, since the user is already aware they have clicked on a text entry. Once you pick a selection from a focused search or cancel, the bar will snap back into address mode. If you hit “enter” while doing a focused search, it will simply open a tab with the results of that search.

Aside from buttons, all the protocol and security information relevant to the page (the highlighted areas on the left) will also be extension-driven. Ideally, this will let you highly customise which warnings you get, and will also let extensions tie any content-altering behaviour into proper warnings. For example, the ad-blocker may broadcast the number of zapped ads. When clicked, the extensions will use QML-driven popouts.

Finally, the address itself (and any focused extension searches) will have extension-driven syntax highlighting. Right now I’m thinking of using a monospace font so we can drive things like bold fonts without offsetting text.

Tabs

Tab placement was a big deal to people; some loved the single-row approach, others wanted a more traditional layout. The solution to the commotion was the fact that there isn’t a single solution. Tabs will have previews and simple information (as seen in the previous round of designs), so by default tabs will be on the bottom or side so the previews don’t obstruct unnecessary amounts of UI.

Fiber will have 3 tabbing options; Tabs on top, tabs on bottom, and tabs on side. When tabs are “on side” it will reduce the UI to one toolbar and place the tabs on the same row as the Multitool, and should also trigger a “compressed” layout for Multitool as well.

There will be the usual “app tab” support of pinning tabs, but not shown here will be tab-extensions. Tab extensions will behave like either app tabs or traditional tabs, and will be QML-powered pages from extensions. These special tabs will also be home-screen or new-tab options, and that is, largely, their purpose; but clever developers may find a use in having extension-based pages.

Tabs can also embed simple toggle-buttons, as usual, powered by extensions. Main candidates for these will be mute buttons or reader-mode buttons. There won’t be much to these buttons, but they will be content-sensitive and extensions will be required to provide the logic for when these buttons should be shown. For example, “reader mode” won’t be shown on pages without articles, and “mute” won’t be shown on pages without sound.

Current Progress

The current focus in Fiber is Profiles, Manifest files, and startup. Profiles will be the same as Firefox profiles, where you can have separate profiles with separate configurations. When in an activities-enabled environment, Fiber Profiles will attempt to keep in sync with the current activity – otherwise they will fall back to having users open a profile tool.

The manifest files are a big deal, since they define how extensions will interact with the browser. Fiber manifest files were originally based on a slimmed-down Chrome manifest with more “Qt-ish” syntax (like CamelCase); but with the more extensive extension plans and placement options there’s more going on with interaction points. There’s a decent manifest class, and it provides a reliable interface to read from, including things like providing missing defaults and offering some debugging info which will be used in Fiber’s extension development tools.

I’m using D-Bus for Fiber to check a few things on startup; Fiber will be a “kind of” single-instance application, but individual profiles will be separate processes. D-Bus is being used to speak with running instances to figure out what it should do. The idea behind this setup is to keep instances on separate activities from spiking each other, but to still allow easier communication between windows of a single instance – this should help things like tab dragging between windows immensely. It also gives the benefit that you could run “unstable” extensions in a separate instance, which will be good for development purposes.

I wish I could say development is going quickly, but right now my time is a bit crunched; either way things are going smoothly, and I’d rather be slow and steady than fast and sloppy.

Development builds will be released in the future (still a long way away) which I’ll be calling “Copper” builds. Copper builds will mostly be a rough and dirty way for me to test UI, and will not be stable or robust browsers. Mostly, it’ll be for the purpose of identifying annoying UI patterns and nipping them before they get written into extensions.
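The “providing missing defaults” behaviour of a manifest reader can be illustrated with a small sketch. All field names here are hypothetical; Fiber’s actual manifest schema and class are not shown in this post, so treat this purely as an illustration of the pattern.

```python
# Hypothetical defaults; Fiber's real manifest keys may differ.
MANIFEST_DEFAULTS = {
    "Placement": "Background",
    "Enabled": True,
}


def read_manifest(data):
    """Merge a parsed manifest over the defaults, and report which keys
    were filled in (useful fodder for extension-development debug output).
    A sketch of the pattern, not Fiber's real manifest class."""
    manifest = dict(MANIFEST_DEFAULTS)
    manifest.update(data)
    if "Name" not in manifest:
        raise ValueError("manifest has no Name entry")
    filled = sorted(set(MANIFEST_DEFAULTS) - set(data))
    return manifest, filled
```

The nice property of this pattern is that extension code downstream never has to guard against missing keys, while the list of filled-in defaults can be surfaced in development tooling.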


Categories: FLOSS Project Planets

FSF Blogs: Friday Free Software Directory IRC meetup: July 3

GNU Planet! - Thu, 2015-07-02 16:52

Join the FSF and friends Friday, July 3, from 2pm to 5pm EDT (18:00 to 21:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.

While the Free Software Directory has been and continues to be a great resource to the world over the past decade, it has the potential of being a resource of even greater value. But it needs your help!

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

Categories: FLOSS Project Planets


Antonio Terceiro: Upgrades to Jessie, Ruby 2.2 transition, and chef update

Planet Debian - Thu, 2015-07-02 16:26

Last month I started to track all the small Debian-related things that I do. My initial motivation was to be conscious of how often I spend short periods of time working on Debian. Sometimes it’s during lunch breaks, weekends, first thing in the morning before regular work, after I am done for the day with regular work, or even during regular work, since I occasionally have the chance of doing Debian work as part of my regular work.

Now that I have this information, I need to do something with it. So this is probably the first of monthly updates I will post about my Debian work. Hopefully it won’t be the last.

Upgrades to Jessie

I (finally) upgraded my two servers to Jessie. The first one, my home server, is a Utilite, which is a quite nice ARM box. It is silent and consumes very little power. The only problem I had with it is that the vendor-provided kernel is too old, so I couldn’t upgrade udev, and therefore couldn’t switch to systemd. I had to force SysV init for now, until I manage to upgrade the kernel and configure U-Boot to properly boot the official Debian kernel.

On my VPS things are way better. I was able to upgrade nicely, and it is now running a stock Jessie system.

Fixed HTTPS on ci.debian.net

pabs had let me know on IRC of an issue with the TLS certificate for ci.debian.net, which took me a few iterations to get right. It was missing the intermediate certificates, and is now fixed. You can now enjoy Debian CI over https.

Ruby 2.2 transition

I was able to start the Ruby 2.2 transition, which has the goal of switching to Ruby 2.2 on unstable. The first step was updating ruby-defaults, adding support for building Ruby packages for both Ruby 2.1 and Ruby 2.2. This was followed by updates to gem2deb (0.18, 0.18.1, 0.18.2, and 0.18.3) and rubygems-integration. At this point, after a few rebuild requests, only 50 out of 137 packages need to be looked at; some of them just use the default Ruby, so a rebuild once we switch the default will be enough to make them use Ruby 2.2, while others, especially Ruby libraries, will still need porting work or other fixes.

Updated the Chef stack

Bringing chef to the very latest upstream release into unstable was quite some work.

I had to update:

  • ruby-columnize (0.9.0-1)
  • ruby-mime-types (2.6.1-1)
  • ruby-mixlib-log 1.6.0-1
  • ruby-mixlib-shellout (2.1.0-1)
  • ruby-mixlib-cli (1.5.0-1)
  • ruby-mixlib-config (2.2.1-1)
  • ruby-mixlib-authentication (1.3.0-2)
  • ohai (8.4.0-1)
  • chef-zero (4.2.2-1)
  • ruby-specinfra (2.35.1-1)
  • ruby-serverspec (2.18.0-1)
  • chef (12.3.0-1)
  • ruby-highline (1.7.2-1)
  • ruby-safe-yaml (1.0.4-1)

In the middle I also had to package a new dependency, ruby-ffi-yajl, which was very quickly ACCEPTED thanks to the awesome work of the ftp-master team.

Random bits

  • Sponsored an upload of redir by Lucas Kanashiro.
  • chake, a tool that I wrote for managing servers with Chef but without a central Chef server, got ACCEPTED into the official Debian archive.
  • vagrant-lxc, a Vagrant plugin for using lxc as a backend and lxc containers as development environments, was also ACCEPTED into unstable.
  • I got the deprecated ruby-rack1.4 package removed from Debian.
Categories: FLOSS Project Planets

Joining the press – Which topic would you like to read about?

Planet KDE - Thu, 2015-07-02 16:26

When I saw an ad on Linux Veda that they are looking for new contributors to their site, I thought “Hey, why shouldn’t I write for them?”. Linux Veda (formerly muktware) is a site that offers Free Software news, how-tos, opinions, reviews and interviews. Since its founder Swapnil Bhartiya is personally a big KDE fan, the site has a track record of covering our software extensively in its news and reviews, and has already worked with us in the past to make sure their articles about our software or community were factually correct (and yes, it was only fact-checking, we never redacted their articles or anything).

Therefore, I thought that a closer collaboration with Linux Veda could be mutually beneficial: Getting exclusive insights directly from a core KDE contributor could give their popularity an additional boost, while my articles could get an extended audience including people who are currently interested in Linux and FOSS, but not necessarily too much interested in KDE yet.

I asked Swapnil if I could write for him. He said it would be an honor to work with me, which I must admit made me feel a little flattered. So I joined Linux Veda as a freelance contributor.

My first article actually isn’t about anything KDE-related, but a how-to for getting 1080p videos to work in YouTube’s HTML5 player in Firefox on Linux, mainly because I had just explained it to someone and felt it might benefit others as well if I wrote it up.

In the future you will mainly read articles about KDE-related topics from me there. Since I’m not sure which topics people would be most interested in, I thought I’d ask you, my dear readers. You can vote on which of the three topics that came to my mind I should write about, or add your own ideas. I’m excited to see which topic will win!

Take Our Poll

Of course this doesn’t mean I won’t write anything in my blog here anymore. I’ll decide on a case-by-case basis whether an article makes more sense here or over at Linux Veda. I hope you’ll find my articles there interesting and also read some of the other things they have on offer; you’ll find many well-written and interesting articles there!


Filed under: KDE
Categories: FLOSS Project Planets

FSF Blogs: Teaching Email Self-Defense: Campaigns intern leads a workshop at PorcFest

GNU Planet! - Thu, 2015-07-02 16:17

My workshop on Email Self-Defense took place at the 12th annual Porcupine Freedom Festival in Lancaster, New Hampshire. Around eight people attended, which was a few more than I expected. Christopher Waid and Bob Call of ThinkPenguin joined me in helping everyone who brought a laptop to set up GnuPG properly. Those who didn't bring a laptop participated by observing the process on the system most similar to their own and asking questions about particular steps, so as to enable them to achieve the same configuration when they returned home.

The workshop was part of the Alternatives Exposition, which brings together people with a broad range of interests with the intention of helping strengthen community and build alternative infrastructure. Free software supporter François-René Rideau also participated in the AltExpo, delivering an excellent talk titled Who Controls Your Computer? which swayed many attendees to investigate the benefits of free software. Thank you, François!

Although I attended PorcFest as an individual (as opposed to a representative of the FSF), I encouraged everyone I talked with to switch to free software. Without freedom, security and privacy are unrealistic goals.

I'm back at the Free Software Foundation now, and I'm processing my experiences. The workshop went well, and provided me with insights on how to better organize similar workshops in the future. Right now, I'm formalizing those insights into a Facilitating Email Self-Defense Workshops guide, to be published within the next two months.

In the meantime, I'd like to ask all of you: what unusual and extraordinary people do you know about who use GnuPG? I'm going to include a list of inspiring people who use GnuPG in my guide, and I'd love hear your suggestions! Send them to campaigns@fsf.org or, for encrypted messages, adaml@fsf.org with the GnuPG key fingerprint 9D7E D11A F670 9719 F854 A307 198C 9A1E 9309 EF0C.

I'm a campaigns intern at the FSF -- learn more about our internships.

Categories: FLOSS Project Planets

KDEPIM report (week 26)

Planet KDE - Thu, 2015-07-02 15:41

My focus was KAddressBook last week.

KAddressBook:

It’s not a complicated application, so I didn’t find a lot of bugs. Indeed, as its maintainer, I had already fixed the critical ones.

Some fixes:

  • I improved Gravatar support.
  • I added a “Server Side Subscription” action. Before, it was necessary to go to KMail or to the resource settings and click the “Server Side Subscription” button, which was not user-friendly.
  • I continued to clean up code and convert to the new Qt5 connect API.
  • I fixed a lot of bugs (layout bugs, etc.).

Other works in KDEPIM:

  • I fixed LDAP support, which had been broken when we ported it to QUrl; now it works again. It’s great!
  • I fixed translations in KOrganizer.
  • The AkonadiSearch D-Bus interface name was changed, so we couldn’t get email indexing information anymore. Fixed.
  • I fixed the IMAP resource interface too.
  • A lot of bugs were fixed in KOrganizer/KMail/SieveEditor, etc.

Future:

Next week I will focus on KNotes and Kleopatra.

Other info:

Dan merged his work replacing Akonadi’s text-based protocol with a binary one. I can confirm it’s faster.

He worked a lot on improving speed; KMail is now very fast.

I hope he will contribute more speed patches.

Sergio continued to add optimizations in kdepimlibs/kdepim/akonadi!

Categories: FLOSS Project Planets

Acquia: Front End Performance Strategy: Styles

Planet Drupal - Thu, 2015-07-02 14:38

The quest for improved page-load speed and website performance is constant. And it should be. The speed and responsiveness of a website have a significant impact on conversion, search engine optimization, and the digital experience in general.

In part one of this series, we established the importance of front-end optimization on performance, and discussed how properly-handled images can provide a significant boost toward that goal. In this second installment, we’ll continue our enhancements, this time by tackling CSS optimization.

We’ll consider general best practices from both a front-end developer’s and a themer’s point of view. Remember, as architects and developers, it’s up to us to inform stakeholders of the impacts of their choices, offer compromises where we can, and implement in smart and responsible ways.

Styles

Before we dive into optimizing our CSS, we need to understand how Drupal’s performance settings for aggregation work. We see developers treating this feature like a black box, turning it on without fully grokking its voodoo. Doing so misses two important strategic opportunities: 1. Controlling where styles are added in the head of our document, and 2. Regulating how many different aggregates are created.

Styles can belong to one of three groups:

  • System - Drupal core
  • Default - Styles added by modules
  • Theme - Styles added in your theme

Drupal aggregates styles from each group into a single sheet for that group, meaning you’ll see at minimum three CSS files being used for your page. Style sheets added by inclusion in a theme’s ‘.info’ file or a module’s ‘.info’ file automatically receive a ‘true’ value for the every_page flag in the options array, which wraps them into our big three aggregates.

Styles added using drupal_add_css automatically have the every_page flag set to ‘false.’ These style sheets are then combined separately, by group, forming special one-off aggregate style sheets for each page.

When using drupal_add_css, you can use the optional ‘options’ array to explicitly set the every_page flag to ‘true.’ You can also set the group it belongs to and give it a weight to move it up or down within a group.

<?php
drupal_add_css(drupal_get_path('module', 'custom_module') . '/css/custom-module.css', array('group' => CSS_DEFAULT, 'every_page' => TRUE));
?>

Style sheets added using Drupal’s attached property aren’t aggregated unless they have the every_page flag set to ‘true.’
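
For comparison, the same opt-in via the attached property looks roughly like this (a sketch; the module name is hypothetical, and in Drupal 7 each attached item may be given as an options array with a 'data' key):

<?php
// Attach a style sheet from a render array or form, opting it
// into the main aggregates with 'every_page' => TRUE.
$form['#attached']['css'][] = array(
  'data' => drupal_get_path('module', 'custom_module') . '/css/custom-module.css',
  'group' => CSS_DEFAULT,
  'every_page' => TRUE,
);
?>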

Favoring every_page: true

Styles added with the every_page flag set to ‘false’ (CSS added via drupal_add_css without the flag explicitly set to ‘true’, or added using the attached property) load only on the pages that call the function that adds or attaches that style sheet, so those pages have a smaller payload. However, each page that does use such a function requires additional HTTP requests to build: one for each one-off aggregate, per group, per page.

In more cases than not, I prefer loading an additional 3kb of styling in my main aggregates that will be downloaded once and then cached locally, rather than create a new aggregate that triggers a separate HTTP request to make up for what’s not in the main aggregates.

Additionally, turning on aggregation causes Drupal to compress our style sheets, serving them Gzipped to the browser. Gzipped assets are 70%–90% smaller than their uncompressed counterparts. That’s a 500kb CSS file being transferred in a 100kb package to the browser.

Preprocessing isn’t a license for inefficiency

I love preprocessing my CSS. I use SASS, written in SCSS syntax, and often utilize Compass for its set of mixins and for compiling. But widespread adoption of preprocessing has led to compiled CSS files that exceed 1 MB (way over the average), or break IE’s limit of 4,095 selectors per style sheet. Some of this can be attributed to Drupal’s notorious nesting of divs (ugly mark-up often leads to ugly CSS), but a lot of it is just really poor coding habits.

The number one culprit I’ve come across is over-nesting of selectors in SASS files. People traverse Drupal’s over-nested DOM with SASS and spit out compiled CSS rules using descendant selectors that go five (sometimes even more) levels deep.
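
The pattern typically looks something like this (an illustrative sketch with hypothetical class names, not the article’s original snippet):

// Over-nested: mirrors Drupal's wrapper divs and compiles to
// descendant selectors five levels deep, e.g.
// .region-content .block .node .field-body a { ... }
.region-content {
  .block {
    .node {
      .field-body {
        a { color: #0074bd; }
      }
    }
  }
}

// Flattened: target the key class directly.
.field-body a { color: #0074bd; }

The flattened rule matches the same links while emitting far fewer selector characters and leaving the browser less ancestor-matching to do.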

I took the above example from the SASS inception page, linked above, and cleaned it up to what it should probably be:

The result? It went from 755 bytes to 297 bytes, a 60% reduction in size. That came just from getting rid of the extra characters added by the excess selectors. Multiply the savings by the 30 or so partials, and it’s a pretty substantial reduction in compiled CSS size.

Besides the size savings, the number and type of selectors directly impact the amount of time a browser takes to process your style rules. Browsers read styles from right to left, matching the rightmost “key” selector first, then working left, disqualifying items as it goes. Mozilla wrote a great efficient CSS reference years ago that is still relevant.

Conclusion

Once again we’ve demonstrated how sloppy front-end implementations can seriously hamper Drupal’s back-end magic. By favoring style sheet aggregation and reining in exuberant preprocessing, we can save the browser a lot of work.

In our next, and final, installment in this series, we’ll expand our front-end optimization strategies even further, to include scripts.

Tags:  acquia drupal planet
Categories: FLOSS Project Planets

Christoph Berg: PostgreSQL 9.5 in Debian

Planet Debian - Thu, 2015-07-02 14:03

Today saw the release of PostgreSQL 9.5 Alpha 1. Packages for all supported Debian and Ubuntu releases are available on apt.postgresql.org:

deb http://apt.postgresql.org/pub/repos/apt/ YOUR_RELEASE_HERE-pgdg main 9.5

The package is also waiting in NEW to be accepted for Debian experimental.

Being curious which PostgreSQL releases have been in use over time, I pulled some graphics from Debian's popularity contest data:

Before we included the PostgreSQL major version in the package name, "postgresql" contained the server, so that line represents the installation count of the pre-7.4 releases at the left end of the graph.

Interestingly, 7.4 reached its installation peak well past 8.1's. Does anyone have an idea why that happened?

Categories: FLOSS Project Planets

Daily Tech Video (Python): [Video 223] Julian Berman: Building an interpreter in RPython

Planet Python - Thu, 2015-07-02 13:00

PyPy, an alternative implementation of Python, has been gaining attention and interest over the last few years, in no small part because of its high speed. PyPy has, at its core, a small language called RPython (“restricted Python”) in which PyPy is implemented. In theory, you can use RPython to implement other languages. In this talk, Julian Berman demonstrates how to build a small language using RPython. If you’re interested in PyPy or in how programming languages work, this talk should be of interest to you.

The post [Video 223] Julian Berman: Building an interpreter in RPython appeared first on Daily Tech Video.

Categories: FLOSS Project Planets

James Mills: A Compose for Docker Machines

Planet Python - Thu, 2015-07-02 11:54

Introducing factory, a new tool for composing Docker machines that wraps around docker-machine itself.

This lets you define a factory.yml file like this:

machines:
  test:
    driver: digitalocean

And run:

$ factory up

That's it! You can define as many machines as you like and the configuration options match those of docker-machine.

There are also a few other nifty commands:

  • stop -- Stop a machine or all machines
  • rm -- Remove a machine or all machines
  • ls -- List all machines
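
Assuming these subcommands mirror docker-machine’s verbs (an assumption on my part, going by the descriptions above), usage would look something like:

$ factory stop test
$ factory rm test
$ factory ls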

Check it out on Github at: https://github.com/prologic/factory

Enjoy! :)

Do check out my other related project autodock and autodock-paas.

Categories: FLOSS Project Planets

Hot corners for configuring the Cinnamon desktop

LinuxPlanet - Thu, 2015-07-02 11:48
In the Cinnamon GUI, we can configure the corners of our desktop to behave the way we want. That is, when we move the cursor to any of the four corners of the desktop, we can make the system behave in a specific way. For example, every time we move the cursor to the left corner, all windows get minimized and the desktop is shown, or all the active windows are popped up on the screen.

This is possible through the Hot Corners application.

In Cinnamon, this is available under Preferences, as shown below.



Once we launch the Hot Corners application, we will be presented with a window as shown below.



We can see the four corners highlighted, and for each corner there is a separate set of activation options. For example, let us say we want to view the desktop every time we move the mouse to the top-left corner of the desktop. In the top-left corner of the Hot Corners window, click on the menu and select the option "Show the desktop".



There are two checkboxes provided below. The first one shows an icon on the desktop which, when clicked, triggers the behavior we have chosen. The second checkbox, when enabled, triggers the selected behavior when we hover the mouse over the corner.

We can also choose to run a specific command when we move the mouse to a corner by choosing the "Run a command" option from the menu and then entering the command in the text box provided. This is useful for applications we use very often, like LibreOffice or Firefox.


Categories: FLOSS Project Planets

Drupal Watchdog: Build it with Backdrop

Planet Drupal - Thu, 2015-07-02 11:29
Feature

Backdrop CMS is a fork of the Drupal project. Although Backdrop has a different name, the 1.0 version is very similar to Drupal 7. Backdrop 1.0 provides an upgrade path from Drupal 7, and most modules or themes can be quickly ported to work on Backdrop.

Backdrop is focused on the needs of small- to medium-sized businesses, non-profits, and those who may not be able to afford the jump from Drupal 7 to Drupal 8. Backdrop values backwards compatibility, and recognizes that the easier it is for developers to get things working, the less it will cost to build, maintain, and update comprehensive websites and web applications. By iterating code in incremental steps, innovation can happen more quickly in the areas that are most important.

The initial version of Backdrop provides major improvements over Drupal 7, including configuration management, a powerful new Layout system, and Views built-in.

What's Different From Drupal 7?

Backdrop CMS is, simply put: Drupal 7 plus built-in Configuration Management, Panels, and Views.

Solving the Deployment Dilemma

Database-driven systems have long suffered from the deployment dilemma: Once a site is live, how do you develop and deploy a new feature?

Sometimes there is only one copy of a site – the live site – and all development must be done on this live environment. But any mistakes made during the development of the new feature will be immediately experienced by visitors to the site.

It’s wiser to also have a separate development environment, which allows for creation of the new feature without putting the live site at risk. But what happens when that new feature has been completed, and needs to be deployed?

We already have a few great tools for solving this problem. When it comes to the code needed for the new feature we have great version control systems like Git. Commit all your work, merge if necessary, push when you’re happy, and then just pull from the live environment (and maybe clear some caches).

Categories: FLOSS Project Planets