FLOSS Project Planets

Matt Raible: Devoxx4Kids - Denver: Introduction to Hardware Concepts with littleBits

Planet Apache - Thu, 2014-10-30 10:20
I'm pleased to announce the second meeting of the Denver Chapter of Devoxx4Kids is now open for registration. It's a two hour class titled Introduction to Hardware Concepts with littleBits and will be taught by Denver's own Tack Mobile. To learn more about littleBits, see http://littlebits.cc. If you or your company would like to help by donating a Workshop Set, please contact me.

The class will be held on Saturday, November 22nd, from 10am - 12pm at Assembly Workspace. Cost is $10, but you'll get that back in the form of a t-shirt. Age requirement is 9-18 and kids should have basic computer skills (copy/paste, opening applications, etc.).

I'd like to thank Juan Sanchez for reaching out to me about this class and inspiring his company (and workspace) to make it all happen. It's been great working with you and your team Juan!

When I started Devoxx4Kids Denver, I was hoping to host a class or two per year. Our first meetup in May was a wild success. After taking the summer off to relax, I started looking for more speakers in early October. The response has been great and we'll have another class about GreenFoot on December 13th. We're even in the planning stages for another session on NAO Humanoid Robot programming in Q1 2015.

If you'd like to get involved with Denver's Devoxx4Kids, please join our meetup group.

Categories: FLOSS Project Planets

EvolvisForge blog: Tip of the day: bind tomcat7 to loopback i/f only

Planet Debian - Thu, 2014-10-30 10:17

We already edit /etc/tomcat7/server.xml after installing the tomcat7 Debian package, to get it to talk AJP instead of HTTP (so we can use libapache2-mod-jk to put it behind an Apache 2 httpd, which also terminates SSL):

We already comment out the block…

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />

… and remove the comment chars around the line…

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

… so all we need to do is edit that line to make it look like…

<Connector address="127.0.0.1" port="8009" protocol="AJP/1.3" redirectPort="8443" />

… and we’re all set.

(Your apache2 vhost needs a line

JkMount /?* ajp13_worker

and everything Just Works™ with the default configuration.)
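
For context, a minimal sketch of what such a vhost might look like; the server name and certificate paths are placeholders, and ajp13_worker is assumed to be the default worker from libapache2-mod-jk's workers.properties:

<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.pem
    SSLCertificateKeyFile /etc/ssl/private/example.key
    # Forward everything to the Tomcat AJP connector on 127.0.0.1:8009
    JkMount /?* ajp13_worker
</VirtualHost>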

Now, tomcat7 is only accessible from localhost (Legacy IP), and we don’t need to firewall the AJP (or HTTP/8080) port. Do make sure your Apache 2 access configuration works, though ☺

Categories: FLOSS Project Planets

Matt Raible: Devoxx4Kids - Denver Chapter Begins!

Planet Apache - Thu, 2014-10-30 10:11

I'm happy to announce the first meeting of the Denver Chapter of Devoxx4Kids is now open for registration. It's a two hour class titled Introduction to Server-Side Minecraft Programming and will be taught by Denver's own Scott Davis. The class will be held on May 3rd, from 10am - 12pm at Thrive's Cherry Creek location, where we can comfortably fit 20 students. Cost is $10, but you'll get that back in the form of a t-shirt. Age requirement is 9-18 and kids should have basic computer skills (copy/paste, opening applications, etc.). Minecraft experience will certainly help too.

I'd like to thank Daniel De Luca for his initial assistance with getting Devoxx4Kids set up in Denver and Arun Gupta for starting Devoxx4Kids USA. Arun has been a great help in getting things going and answering my questions over the last few months. Of course, none of it would be happening without Scott Davis or Thrive. If you join us on May 3rd, you'll see that Scott is an awesome teacher and Thrive has some incredible facilities.

My initial goal with Devoxx4Kids in Denver is to host a class or two this year. If there's enough demand, we can expand. For now, we're starting small and seeing where it takes us. If you're interested in teaching a future class, please let me know. We'd love to teach the kids a number of skills, from Scratch to NAO Humanoid Robot programming to building things with Arduino and Raspberry Pi.

Categories: FLOSS Project Planets

Hernan Grecco: PyVISA command-line utilities

Planet Python - Thu, 2014-10-30 09:33
PyVISA is a Python frontend for the VISA library that enables controlling all kinds of measurement equipment through GPIB, RS232, USB and Ethernet, among other interfaces.

If you are following the development of PyVISA you might have seen that we have recently made the visa module executable to provide a few useful utilities. To try this, you need to update to the latest PyVISA:

$ pip install -U https://github.com/hgrecco/pyvisa/zipball/master

First, we now provide a simpler way to get debug information:

$ python -m visa info
Machine Details:
   Platform ID:    Darwin-10.8.0-x86_64-i386-32bit
   Processor:      i386

Python:
   Implementation: CPython
   Executable:     /Users/grecco/envs/lantz/bin/python
   Version:        3.2.3
   Compiler:       GCC 4.2.1 (Apple Inc. build 5666) (dot 3)
   Bits:           32bit
   Build:          Apr 10 2012 11:25:50 (#v3.2.3:3d0686d90f55)
   Unicode:        UCS2

PyVISA Version: 1.6.1

Backends:
   ni:
      Version: 1.6.1 (bundled with PyVISA)
      #1: /Library/Frameworks/visa.framework/visa:
         found by: auto
         bitness: 32
         Vendor: National Instruments
         Impl. Version: 5243392
         Spec. Version: 5243136
   py:
      Version: 0.1.dev0
      ASRL INSTR: Available via PySerial (10.8.0)
      TCPIP INSTR: Available
      USB INSTR: Available via PyUSB (1.0.0rc1). Backend: libusb0

Notice also that more useful information is given, including details about the different backends (in this case ni and py).

Another utility is the VISA shell, which was taken from the Lantz project. It provides a way to list, open and query devices. It also allows you to get (and in the near future set) attributes. The shell has built-in help and autocompletion:

$ python -m visa shell
Welcome to the VISA shell. Type help or ? to list commands.

(visa) list
( 0) ASRL2::INSTR
( 1) ASRL1::INSTR
(visa) open ASRL1::INSTR
ASRL1::INSTR has been opened.
You can talk to the device using "write", "read" or "query".
The default end of message is added to each message
(open) attr
+-----------------------------+------------+----------------------------+-------------------------------------+
|          VISA name          |  Constant  |        Python name         |                 val                 |
+-----------------------------+------------+----------------------------+-------------------------------------+
| VI_ATTR_ASRL_ALLOW_TRANSMIT | 1073676734 |       allow_transmit       |                  1                  |
|    VI_ATTR_ASRL_AVAIL_NUM   | 1073676460 |      bytes_in_buffer       |                  0                  |
|      VI_ATTR_ASRL_BAUD      | 1073676321 |         baud_rate          |                 9600                |
|    VI_ATTR_ASRL_BREAK_LEN   | 1073676733 |        break_length        |                 250                 |
|   VI_ATTR_ASRL_BREAK_STATE  | 1073676732 |        break_state         |                  0                  |
|    VI_ATTR_ASRL_CONNECTED   | 1073676731 |                            |          VI_ERROR_NSUP_ATTR         |
|    VI_ATTR_ASRL_CTS_STATE   | 1073676462 |                            |                  0                  |
|    VI_ATTR_ASRL_DATA_BITS   | 1073676322 |         data_bits          |                  8                  |
|    VI_ATTR_ASRL_DCD_STATE   | 1073676463 |                            |                  0                  |
|  VI_ATTR_ASRL_DISCARD_NULL  | 1073676464 |        discard_null        |                  0                  |
|    VI_ATTR_ASRL_DSR_STATE   | 1073676465 |                            |                  0                  |
|    VI_ATTR_ASRL_DTR_STATE   | 1073676466 |                            |                  1                  |
|     VI_ATTR_ASRL_END_IN     | 1073676467 |         end_input          |                  2                  |
|     VI_ATTR_ASRL_END_OUT    | 1073676468 |                            |                  0                  |
|   VI_ATTR_ASRL_FLOW_CNTRL   | 1073676325 |                            |                  0                  |
|     VI_ATTR_ASRL_PARITY     | 1073676323 |           parity           |                  0                  |
|  VI_ATTR_ASRL_REPLACE_CHAR  | 1073676478 |        replace_char        |                  0                  |
|    VI_ATTR_ASRL_RI_STATE    | 1073676479 |                            |                  0                  |
|    VI_ATTR_ASRL_RTS_STATE   | 1073676480 |                            |                  1                  |
|    VI_ATTR_ASRL_STOP_BITS   | 1073676324 |         stop_bits          |                  10                 |
|    VI_ATTR_ASRL_WIRE_MODE   | 1073676735 |                            |                 128                 |
|    VI_ATTR_ASRL_XOFF_CHAR   | 1073676482 |         xoff_char          |                  19                 |
|    VI_ATTR_ASRL_XON_CHAR    | 1073676481 |          xon_char          |                  17                 |
|     VI_ATTR_DMA_ALLOW_EN    | 1073676318 |         allow_dma          |                  0                  |
|    VI_ATTR_FILE_APPEND_EN   | 1073676690 |                            |                  0                  |
|    VI_ATTR_INTF_INST_NAME   | 3221160169 |                            | ASRL1  (/dev/cu.Bluetooth-PDA-Sync) |
|       VI_ATTR_INTF_NUM      | 1073676662 |      interface_number      |                  1                  |
|      VI_ATTR_INTF_TYPE      | 1073676657 |                            |                  4                  |
|       VI_ATTR_IO_PROT       | 1073676316 |        io_protocol         |                  1                  |
|   VI_ATTR_MAX_QUEUE_LENGTH  | 1073676293 |                            |                  50                 |
|   VI_ATTR_RD_BUF_OPER_MODE  | 1073676330 |                            |                  3                  |
|     VI_ATTR_RD_BUF_SIZE     | 1073676331 |                            |                 4096                |
|      VI_ATTR_RM_SESSION     | 1073676484 |                            |               3160976               |
|      VI_ATTR_RSRC_CLASS     | 3221159937 |       resource_class       |                INSTR                |
|  VI_ATTR_RSRC_IMPL_VERSION  | 1073676291 |   implementation_version   |               5243392               |
|   VI_ATTR_RSRC_LOCK_STATE   | 1073676292 |         lock_state         |                  0                  |
|     VI_ATTR_RSRC_MANF_ID    | 1073676661 |                            |                 4086                |
|    VI_ATTR_RSRC_MANF_NAME   | 3221160308 | resource_manufacturer_name |         National Instruments        |
|      VI_ATTR_RSRC_NAME      | 3221159938 |       resource_name        |             ASRL1::INSTR            |
|  VI_ATTR_RSRC_SPEC_VERSION  | 1073676656 |        spec_version        |               5243136               |
|     VI_ATTR_SEND_END_EN     | 1073676310 |          send_end          |                  1                  |
|   VI_ATTR_SUPPRESS_END_EN   | 1073676342 |                            |                  0                  |
|       VI_ATTR_TERMCHAR      | 1073676312 |                            |                  10                 |
|     VI_ATTR_TERMCHAR_EN     | 1073676344 |                            |                  0                  |
|      VI_ATTR_TMO_VALUE      | 1073676314 |                            |                 2000                |
|       VI_ATTR_TRIG_ID       | 1073676663 |                            |                  -1                 |
|   VI_ATTR_WR_BUF_OPER_MODE  | 1073676333 |                            |                  2                  |
|     VI_ATTR_WR_BUF_SIZE     | 1073676334 |                            |                 4096                |
+-----------------------------+------------+----------------------------+-------------------------------------+
(open) close
The resource has been closed.
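
The same open/query flow that the shell demonstrates can also be done programmatically. Here is a minimal sketch using the PyVISA 1.6 API (the resource name and the *IDN? command are assumptions about your particular instrument):

import visa

rm = visa.ResourceManager()               # default backend, as reported by "python -m visa info"
print(rm.list_resources())                # what the shell's "list" command shows

inst = rm.open_resource('ASRL1::INSTR')   # what the shell's "open" command does
print(inst.query('*IDN?'))                # write a command and read the reply
inst.close()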


Again, this release is only possible thanks to the many people who contributed bug reports, testing and code. Thanks to everybody!

Submit your bug reports, comments and suggestions in the Issue Tracker. We will address them promptly.

Read the development docs: https://pyvisa.readthedocs.org/en/master/
or fork the code: https://github.com/hgrecco/pyvisa/
Categories: FLOSS Project Planets

Alessio Treglia: Handling identities in distributed Linux cloud instances

Planet Debian - Thu, 2014-10-30 08:55

I’ve many distributed Linux instances across several clouds, be they global, such as Amazon or Digital Ocean, or regional clouds such as TeutoStack or Enter.

Probably many of you are facing the same issue: having a consistent UNIX identity across multiple instances. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.

So, how to solve this issue, while being secure? The trick is to use the new NSS module for SecurePass.

While SecurePass has traditionally been used in the operating system just for two-factor authentication, the new beta release is capable of holding “extended attributes”, i.e. arbitrary information for each user profile.

We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will:

  • Use PAM to authenticate the user via RADIUS
  • Use the new NSS module for SecurePass to have a consistent UID/GID/….

SecurePass and extended attributes

The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called “Extended Attributes” (or xattrs) and, as you can imagine, is organized as key/value pairs.

You will need the SecurePass tools to be able to modify users’ extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it, just:

# apt-get install securepass-tools

ERRATA CORRIGE: securepass-tools hasn’t been uploaded to Debian yet; Alessio is working hard to make the package available in time for Jessie, though.

For other distributions or previous releases, there’s a python package (PIP) available. Make sure that you have pycurl installed and then:

# pip install securepass-tools

While the SecurePass tools allow a local configuration file, for this tutorial we highly recommend creating a global /etc/securepass.conf, so that it can also be used by the NSS module. The configuration file looks like:

[default]
app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/

Where app_id and app_secret are valid API keys to access the SecurePass beta.

Through the command line, we will be able to set UID, GID and all the required Unix attributes for each user:

# sp-user-xattrs user@domain.net set posixuid 1000

While posixuid is the bare minimum attribute needed for a Unix login, the following attributes are valid (a combined example follows the list):

  • posixuid → UID of the user
  • posixgid → GID of the user
  • posixhomedir → Home directory
  • posixshell → Desired shell
  • posixgecos → Gecos (defaults to username)
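
The remaining attributes can presumably be set with the same command pattern shown above; a sketch with placeholder values (the user, GID, home directory and shell here are illustrative, not taken from the original article):

# sp-user-xattrs user@domain.net set posixgid 100
# sp-user-xattrs user@domain.net set posixhomedir /home/user
# sp-user-xattrs user@domain.net set posixshell /bin/bash
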
Install and Configure NSS SecurePass

In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:

# apt-get install libnss-securepass

Previous releases of Debian and Ubuntu, as well as CentOS and RHEL, can still run the NSS module. Download the sources from:

https://github.com/garlsecurity/nss_securepass

Then:

./configure
make
make install    (Debian/Ubuntu only)

For CentOS/RHEL/Fedora you will need to copy files in the right place:

/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so

The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"

This provides defaults in case attributes other than posixuid are not set for a user. We need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding “sp” to the passwd entry as follows:

$ grep sp /etc/nsswitch.conf
passwd:     files sp

Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

$ getent passwd user
user:x:1000:100:My User:/home/user:/bin/bash
$ id user
uid=1000(user) gid=100(users) groups=100(users)

Using this setup by itself wouldn’t allow users to login to a system because the password is missing. We will use SecurePass’ authentication to access the remote machine.

Configure PAM for SecurePass

On Debian/Ubuntu, install the RADIUS PAM module with:

# apt-get install libpam-radius-auth

If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL

Be aware that this has not been tested with SELinux enabled (check that it is off or permissive).

On CentOS/RHEL, install the RADIUS PAM module with:

# yum -y install pam_radius

Note: as of the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat's Bugzilla to include this package in EPEL 7 as well.

Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for the RADIUS authentication. If the server is behind NAT, specify the public IP address that will be translated to it. After completion we get a small recap of the newly created device. For the sake of example, we use “secret” as our secret password.

Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3

Of course the “secret” is the same one we set up in the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage the authentication.

In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open the /etc/pam.d/common-auth configuration and make sure that pam_radius_auth.so is in the list.

auth required pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so

Conclusions

Handling many distributed Linux instances poses several challenges, from software updates to identity management and central logging. In a cloud scenario, traditional enterprise solutions are not always applicable, but new tools might become very handy.

To freely subscribe to the SecurePass beta, join SecurePass on: http://www.secure-pass.net/open
And then send an e-mail to info@garl.ch requesting beta access.

Categories: FLOSS Project Planets

Sergey Beryozkin: JSR-370: Even Better JAX-RS on the way

Planet Apache - Thu, 2014-10-30 06:29
No doubt JAX-RS 2.0 (JSR-339) has been, is and will be a success - a lot has been written about the top features JAX-RS 2.0 offers. It is still very much a relevant story for many developers who are migrating their REST services to JAX-RS 2.0; it is not always easy for a given production system to switch to a new specification's API fast.

But JAX-RS 2.0 is not the end of JAX-RS as such. So the fact that JSR-370 (JAX-RS 2.1) is now active is very good news for all of us working with or interested in JAX-RS.
Have a look at the "Request" section and check the list of the improvements and new features that the specification will cover. Good stuff. Note the effort will be made to have JAX-RS applications much better prepared for supporting Web-based UI frontends. Another thing to note is the fact it will be Java 8 based so expect Java 8 features making themselves visible in JAX-RS 2.1 API, Marek and Santiago will come up with some very cool API ideas.

All is great in the JAX-RS space. Explore it and enjoy!

Categories: FLOSS Project Planets

Reinout van Rees: Ubuntu PPA madness

Planet Python - Thu, 2014-10-30 06:10

I'm going flipping insane. In ye olde days, when I was programming with the Python CMS Plone, my dependencies were limited to Python and PIL. Perhaps lxml. lxml was a pain to install sometimes, but there were ways around it.

Working on OSX was no problem. Server setup? Ubuntu. The only thing you really had to watch in those days was your Python version. Does this old site still depend on Python 2.4 or is it fine to use 2.6? Plone had its own Zope database, so you didn't even need database bindings.

Now I'm working on Django sites. No problem with Django, btw! But... the sites we build with it are pretty elaborate geographical websites with lots of dependencies. Mapnik, matplotlib, numpy, scipy, gdal, spatialite, postgis. And that's not the full list. So developing on OSX is no fun anymore, using a virtual machine (virtualbox or vmware) is a necessity. So: Ubuntu.

But... ubuntu 12.04, which we still use on most of the servers, has too-old versions of several of those packages. We need a newer gdal, for instance. And a newer spatialite. The common solution is to use a PPA for that, like ubuntugis-stable.

Now for some random things that can go wrong:

  • We absolutely need a newer gdal, so we add the ubuntugis-stable PPA. This has nice new versions for lots of geo-related packages, for instance the "proj" projection library.

  • It doesn't include the "python-pyproj" package, though, which means that the ubuntu-installed python-pyproj package is compiled against a different proj library. Which means your django site segfaults. Digging deep with strace was needed to discover the problem.

  • Of course, if you need that latest gdal for your site, you add the PPA to the server. Everything runs fine.

  • A month later, the server has to be rebooted. Now the three other sites on that same server fail to start due to the pyproj-segfault. Nobody bothered to check the other sites on the server, of course. (This happened three times on different servers. This is the sort of stuff that makes you cast a doubtful eye on our quite liberal "sudo" policy...)

  • Pinning pyproj to 1.9.3 helped, as 1.9.3 worked around the issue by bundling the proj library instead of relying on the OS-packaged one.

  • Ubuntugis-stable sounds stable, but they're of course focused on getting the latest geo packages into Ubuntu. So they switched from gdal 1.9 to 1.10 somewhere around June. So /usr/lib/libgdal1.so became /usr/lib/libgdal1h.so and suddenly "apt-get update/upgrade" took down many sites.

    See this travis-ci issue for some background.

  • The solution for this PPA problem was another PPA: the postgres one. That includes gdal 1.9 instead of the too-new 1.10.

  • Possible problem: the postgres PPA also uses 1.9 on the new ubuntu 14.04. 14.04 contains gdal 1.10, so using the postgres PPA downgrades gdal. That cannot but break a lot of things for us.

  • I just discovered a site that couldn't possibly work. It needs the ubuntugis-stable PPA as it needs a recent spatialite. But it also needs the postgres PPA for the 1.9 gdal! And those two don't match.

  • It still works, though. I'm not totally sure why. On a compilation machine where we build a custom Debian package for one of the components, the postgres PPA was installed manually outside of the automatic build scripts. And a jenkins server where we test it still has the ubuntugis PPA, but somehow it still has the old 1.9 gdal. Probably someone pinned it? (See the pinning sketch after this list.)

  • Another reason is probably that one of the components was compiled before the 1.9/1.10 gdal change and didn't need re-compilation yet. Once that must be done we're probably in deep shit.

  • If I look at some ansible scripts that are used to set up some of our servers, I see the ubuntugis PPA, the mapnik/v2.2.0 PPA and the redis PPA. Oh, how can that ever work? The software on those servers needs the 1.9 gdal, right?

  • I asked a colleague. Apparently the servers were all created before June and they haven't done an "apt-get upgrade" since. That's why they still work.
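
For reference, apt pinning of the sort guessed at above is usually done with an entry under /etc/apt/preferences.d/; a minimal sketch (the package names and version pattern are illustrative, not taken from the actual servers):

# /etc/apt/preferences.d/gdal-hold  (illustrative only)
Package: libgdal1 gdal-bin python-gdal
Pin: version 1.9.*
Pin-Priority: 1001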

Personally, I think the best way forward is to use ubuntu 14.04 LTS with its recent versions. And to stick to the base ubuntu as much as possible. And if one or two packages are needed in more recent versions, try to somehow make a custom package for it without breaking the rest. I did something like that for mapnik, where we somehow needed the ancient 0.7 version on some servers.

If a PPA equates to never being able to do "apt-get update", I don't really think it is the best way forward for servers that really have to stay up.

Does someone have other thoughts? Other solutions? And no, I don't think docker containers are the solution as throwing around PPAs doesn't get more stable once you isolate it in a container. You don't break anything else, true, but the container itself can be broken by an update just fine.

Categories: FLOSS Project Planets

Four Kitchens: Testing Drupal with CasperJS

Planet Drupal - Thu, 2014-10-30 06:05

In our last post we used CasperJS to rapidly test the user interface of a website. Now we will build on these skills and add a familiar element into the mix: Drupal. Like any framework, Drupal offers many predictable, standard behaviors which we can take advantage of. Using this predictability, we can easily test many behaviors including logged-in activity such as posting content.

Testing JavaScript Drupal
Categories: FLOSS Project Planets

S. Lott: My First Webcast

Planet Python - Thu, 2014-10-30 04:00
http://www.oreilly.com/pub/e/3255

I'm a pretty good public speaker. But I've avoided webcasting and podcasting because it's kind of daunting. In a smaller venue, the audience members are right there, and you can tell if you're not making sense. In a webcast, the feedback will be indirect. In a podcast it seems like it would be nonexistent.

Also, I find that programming is an intensely literate experience. It's about reading and writing. A podcast -- listening and watching -- seems very un-programmerly to me. Perhaps I'm just being an old "get-off-my-lawn-you-kids" fart.

But I'll see how the webcast thing goes in January, and perhaps I'll try to do some podcasts.

Categories: FLOSS Project Planets

Keith Packard: Glamor cleanup

Planet Debian - Thu, 2014-10-30 03:51
Glamor Cleanup

Before I start really digging in to reworking the Render support in Glamor, I wanted to take a stab at cleaning up some cruft which has accumulated in Glamor over the years. Here's what I've done so far.

Get rid of the Intel fallback paths

I think it's my fault, and I'm sorry.

The original Intel Glamor code has Glamor implement accelerated operations using GL, and when those fail, the Intel driver would fall back to its existing code, either UXA acceleration or software. Note that it wasn't Glamor doing these fallbacks, instead the Intel driver had a complete wrapper around every rendering API, calling special Glamor entry points which would return FALSE if GL couldn't accelerate the specified operation.

The thinking was that when GL couldn't do something, it would be far faster to take advantage of the existing UXA paths than to have Glamor fall back to pulling the bits out of GL, drawing to temporary images with software, and pushing the bits back to GL.

And, that may well be true, but what we've managed to prove is that there really aren't any interesting rendering paths which GL can't do directly. For core X, the only fallbacks we have today are for operations using a weird planemask, and some CopyPlane operations. For Render, essentially everything can be accelerated with the GPU.

At this point, the old Intel Glamor implementation is a lot of ugly code in Glamor without any use. I posted patches to the Intel driver several months ago which fix the Glamor bits there, but they haven't seen any review yet and so they haven't been merged, although I've been running them since 1.16 was released...

Getting rid of this support let me eliminate all of the _nf functions exported from Glamor, along with the GLAMOR_USE_SCREEN and GLAMOR_USE_PICTURE_SCREEN parameters, along with the GLAMOR_SEPARATE_TEXTURE pixmap type.

Force all pixmaps to have exact allocations

Glamor has a cache of recently used textures that it uses to avoid allocating and de-allocating GL textures rapidly. For pixmaps small enough to fit in a single texture, Glamor would use a cache texture that was larger than the pixmap.

I disabled this when I rewrote the Glamor rendering code for core X; that code used texture repeat modes for tiles and stipples; if the texture wasn't the same size as the pixmap, then texturing would fail.

On the Render side, Glamor would actually reallocate pixmaps used as repeating texture sources. I could have fixed up the core rendering code to use this, but I decided instead to just simplify things and eliminate the ability to use larger textures for pixmaps everywhere.

Remove redundant pixmap and screen private pointers

Every Glamor pixmap private structure had a pointer back to the pixmap it was allocated for, along with a pointer to the Glamor screen private structure for the related screen. There's no particularly good reason for this, other than making it possible to pass just the Glamor pixmap private around a lot of places. So, I removed those pointers and fixed up the functions to take the necessary extra or replaced parameters.

Similarly, every Glamor fbo had a pointer back to the Glamor screen private too; I removed that and now pass the Glamor screen private parameter as needed.

Reducing pixmap private complexity

Glamor had three separate kinds of pixmap private structures, one for 'normal' pixmaps (those allocated by themselves in a single FBO), one for 'large' pixmaps, where the pixmap was tiled across many FBOs, and a third for 'atlas' pixmaps, which presumably would be a single FBO holding multiple pixmaps.

The 'atlas' form was never actually implemented, so it was pretty easy to get rid of that.

For large vs normal pixmaps, the solution was to move the extra data needed by large pixmaps into the same structure as that used by normal pixmaps and simply initialize those elements correctly in all cases. Now, most code can ignore the difference and simply walk the array of FBOs as necessary.

The other thing I did was to shrink the number of possible pixmap types from 8 down to three. Glamor now exposes just these possible pixmap types:

  • GLAMOR_MEMORY. This is a software-only pixmap, stored in regular memory and only drawn with software. This is used for 1bpp pixmaps, shared memory pixmaps and glyph pixmaps. Most of the time, these pixmaps won't even get a Glamor pixmap private structure allocated, but if you use one of these with the existing Render acceleration code, that will end up wanting a private pointer. I'm hoping to fix the code so we can just use a NULL private to indicate this kind of pixmap.

  • GLAMOR_TEXTURE. This is a full Glamor pixmap, capable of being used via either GL or software fallbacks.

  • GLAMOR_DRM_ONLY. This is a pixmap based on an FBO which was passed from the driver, and for which Glamor couldn't get the underlying DRM object. I think this is an error, but I don't quite understand what's going on here yet...

Future Work
  • Deal with X vs GL color formats
  • Finish my new CompositeGlyphs code
  • Create pure shader-based gradients
  • Rewrite Composite to use the GPU for more computation
  • Take another stab at doing GPU-accelerated trapezoids
Categories: FLOSS Project Planets

Bluespark Labs: Follow the readiness of the top 100 modules for Drupal 8 with our automatically updated tool

Planet Drupal - Thu, 2014-10-30 03:42

With the first Drupal 8 beta having been released at Drupalcon Amsterdam, we thought this would be a good time to take a look at the top 100 projects on drupal.org to see just how far along the process of preparing for Drupal 8 is. However, given that there's a lot of progress to be made and I don't feel like manually updating a long list of modules, I decided to make a small tool to get the status of these modules and keep the data up to date.

This turned out to be a fun little project, and slightly more involved than I anticipated at first. (Isn't it always the case!) However, at its heart it's a bone-simple Drupal project - one content type for the Drupal projects (and their metadata) we're interested in, and a few views to show them as a table and calculate simple statistics. The work of updating the metadata from drupal.org is handled in 85 lines of code, using hook_cron to add each project to a Queue to be processed. The queue callback borrows code from the update module and simply gets release data, parses it, and updates the metadata on the project nodes. In the end, the most work was doing the research to determine which projects are already in core, and adding notes about where to find D8 upgrade issues and so on.
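
For readers curious what that cron/queue wiring typically looks like in Drupal 7, here is a hypothetical sketch; the module name, queue name and helper functions are made up for illustration and are not from the actual tool:

/**
 * Implements hook_cron(): enqueue every tracked project node for a refresh.
 */
function d8status_cron() {
  $queue = DrupalQueue::get('d8status_refresh');
  foreach (d8status_tracked_project_nids() as $nid) {  // hypothetical helper
    $queue->createItem($nid);
  }
}

/**
 * Implements hook_cron_queue_info(): tell Drupal which callback drains the queue.
 */
function d8status_cron_queue_info() {
  return array(
    'd8status_refresh' => array(
      'worker callback' => 'd8status_refresh_project',  // fetches and parses release data
      'time' => 60,
    ),
  );
}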

So, how did it all turn out? Using the current top 100 projects based on the usage statistics on drupal.org, our tool tells us that as of today, out of the 100 most popular projects:

Thanks for reading, and be sure to keep an eye on the status page to see how the most used contrib modules are coming along!

Tags: Drupal Planet, Drupal 8
Categories: FLOSS Project Planets

PreviousNext: Drupal 7.32 critical update: Our Response

Planet Drupal - Thu, 2014-10-30 01:30

With the Drupal Security team's release of a public service announcement, the infamous security update known as 'SA-005' is back in the news. Even though it's old news, we've been fielding a new round of questions, so we thought we'd try to clear up some of the confusion.

Categories: FLOSS Project Planets

Modules Unraveled: 124 Creating Drupal Configuration in Code Using CINC with Scott Reynen - Modules Unraveled Podcast

Planet Drupal - Thu, 2014-10-30 01:00
Published: Thu, 10/30/14
Download this episode

CINC
  • What is CINC?
  • How is it different from Features or Configuration Management?
  • Is it something you use on an ongoing basis? Or is it just for the initial site setup?
  • What types of configuration can you manage with CINC?
  • What if you already have a content type created, and you want to add a field to the content type?
    • How does that affect existing content, and new content.
  • What about the reverse? Can you remove a field?
    • What happens to the data that is already in the database?
  • Can you undo configuration that you’ve created with CINC?
  • How do you prevent site admins from disabling the module and deleting their content types?
  • CINC YAML
  • CINC & Features
  • CINC & Drupal 8 Config API
  • cinc.io
  • Sheet2Module
  • How do you see CINC working in a headless Drupal setting?
Use Cases
  • Create dozens of fields quickly.
  • Add a field to a content type after an existing field.
  • Update configuration only if it still matches the default settings.
  • How do you use this in a dev/staging/production
  • Have you noticed any improved feedback, improvements to your workflow while using CINC?
  • If people want to jump in and help development or work on new features what should they do?
Episode Links: Scott on drupal.org, Scott on Twitter
Tags: Configuration, planet-drupal
Categories: FLOSS Project Planets

BlackMesh: Looking at DrupalCon Amsterdam Sprints, Upcoming sprints for you to attend

Planet Drupal - Thu, 2014-10-30 00:00

By:
Tim Erickson, stpaultim, @stpaultim from Triplo
Alina, alimac, @czaroxiejka
Cathy Theys, YesCT, @YesCT from BlackMesh

DrupalCon Amsterdam Sprints

DrupalCon is a great place to enhance your Drupal skills, learn about the latest modules, and improve your theming techniques. Sure, there are sessions, keynotes, vendor displays, and parties... like trivia night!

But.. there is also the opportunity to look behind the curtain and see how the software really gets made. And, more importantly, to lend your hand in making it. For six days, three before and three after DrupalCon, there are dedicated sprint opportunities where you can hang out with other Drupalistas testing, summarizing issues, writing documentation, working on patches, or generally contributing to the development of Drupal and the Drupal community.

We want to share some details about the DrupalCon Amsterdam Sprints (and pictures to reminisce about the good times) and mention some upcoming sprints that you can hopefully attend.

Sprint sponsors

Our sponsors helped us have:

  • Space:
    • Co-working space Saturday and Sunday before the con.
    • Sprint space at the venue Monday-Thursday.
    • Big sprint space Friday.
    • Co-working space Saturday and Sunday after the con.
  • Food and coffee all of the days.
  • Sprint supplies: task cards, stickers, markers, signs, flip charts.
  • Mentor thank you dinner.
Pre-con sprints

During the weekend before DrupalCon 60 people gathered on Saturday and 100 on Sunday at The Berlage, a fantastic old castle just blocks from the central train station. On most days the Berlage serves as co-working space. For 48 hours it was home to contributors working together on Drupal core, contrib projects, distributions and Drupal.org itself. Our supportive sponsors supplied lunch and coffee on both days while contributors worked on a number of initiatives: Multilingual, Drupal 8 criticals and beta blocking issues, Headless Drupal and REST, porting contrib projects to Drupal 8, Drupal 8 Frontend United, Search, Drupal.org, Behat (Behavior Driven and javascript/frontend testing), Commerce, Panopoly, Rules, Media, Documentation, Migration, Performance, Modernizing Testbot, and more.


The outside of the Berlage co-working space (castle) with the Drupal Association banner.
(photo: @gaborhojtsy)


Sprinters sprinting inside the cool looking Berlage.
marthinal, franSeva, estoyausente, YesCT, Ryan Weal
(photo: @gaborhojtsy)

We had lots of rooms for groups to gather at the Berlage.


pwolanin, dawehner, wimleers, Hydra, swentel
(photo: @Schnitzel)


Sutharsan, yched, Berdir
(photo: @Schnitzel)

On Monday sprint attendance grew to 180 sprinters. We moved to the conference venue, Amsterdam RAI. Other pre-conference events taking place included trainings, the Community Summit, and the Business Summit. At this particular DrupalCon there was much excitement about the anticipated beta release of Drupal. Many people did a lot of testing to make sure that the beta would be ready.


Discussing a beta blocker issue they found.
lauriii, sihv, Gábor Hojtsy, lanchez
(photo: @borisbaldinger)


Lots of people sprinting and testing the beta candidate, with support from experienced core contributors walking around and helping.
tstoeckler, mauzeh
(photo: @borisbaldinger)

During the con

Sprinting continued during the conference, Tuesday through Thursday. And, to prepare for Friday's mentored sprint, the core mentoring team scheduled a series of 8 BOFs (‘Birds of a Feather’ or informal sessions). Preparations included mentor orientation, setting up local environments, and reading, updating, and tagging issues in the Drupal issue queue. Mentoring BoFs were open to all conference participants.


Mentor Training
YesCT, sqndr, -, -, lazysoundsystem, neoxavier, Mac_Weber, patrickd, roderik, jmolivas, marcvangend, -, realityloop, rteijeiro
(photo: stpaultim)

To promote contribution sprints, mentors volunteered at the mentoring booth in the exhibition hall during all three days of DrupalCon. Conference attendees who visited the booth learned about the Friday sprints. Mentors also recruited additional mentors, and encouraged everyone to get involved in contributing to Drupal.


The mentor booth with lots of signage, and welcoming people.
mradcliffe, kgoel
(photo: stpaultim )

At the booth, conference attendees were able to pick up our new contributor role task cards and stickers which outlined some of the various ways that people can contribute to Drupal and provided them with a sticker as recognition for the specific roles that they already play.


Task cards and stickers
(photo: @HornCologne)

Mentored Sprint

In Amsterdam, 450 people showed up to contribute to Drupal on Friday.


(photo: _SteffenR)

People gathered in groups to work on issues together.


-, -, -, -, -
(photo: @peterlozano)

For many people the highlight of the week is the large “mentored” sprint on Friday. 180 of the 450 participated in our First-time sprinter workshop designed to help Drupal users and developers better understand the community, the issue queues, and contribution. The workshop helped people install the tools they would use as contributors. Another 100 were ready to start work right away with our 50 mentors. Throughout the day people from the first-time sprinter workshop transitioned to contributing with other sprinters and mentors. Sprinters and mentors helped people identify issues that had tasks that aligned with their specific skills and experience.


The workshop room.
(photo: stpaultim)


Mentors (in orange shirts): rachel_norfolk, roderik
(photo: stpaultim)


Hand written signs were everywhere!
(photo: stpaultim)


A group picture of some of the mentors.
mradcliffe, Aimee Degnan, alimac, kgoel, rteijero, Deciphered, emma.maria, mon_franco, patrickd, 8thom, -, lauriii, marcvangend, ceng, Ryan Weal, YesCT, realityloop, -, lazysoundsystem, roderik, Xano, David Hernández, -, -, -, -
(photo: @Crell)

Near the end of the day, over 100 sprinters (both beginners and veterans) gathered to watch the work of first time contributors get committed (added) to Drupal core. Angie Byron (webchick) walked the audience through the process of evaluating, testing, and then committing a patch to Drupal core.


Live commit by webchick
webchick, -, -, marcvangend
(photo: Pedro Lozano)

Extended sprints on Saturday and Sunday

On Saturday after DrupalCon 80 dedicated contributors moved back to the Berlage to continue the work on Drupal core. 60 people came to contribute on Sunday. During these final days of extended sprints, Drupal beginners and newcomers had the chance to exercise their newly acquired skills while working together with some of the smartest and most experienced Drupal contributors in the world. The value of the skill exchanges and personal relationships that come from working in this kind of environment cannot be underestimated. While there is an abundance of activity during Friday’s DrupalCon contribution sprints, the atmosphere during extended sprints is a bit more relaxed. Attending the pre and post-con sprints gives sprinters time to dive deep into issues and tie up loose ends. After a number of hallway and after-session conversations, contributors working on specific Drupal 8 initiatives meet to sketch out ideas, using whiteboards or any means of note-taking to make plans for the future.


LoMo, Outi, pfrenssen, lauriii, mortendk, emma.maria, lewisnyman
(photo: stpaultim)


Aimee Degnan, Schnitzel, dixon, -, Xano, alimac, boris, Gábor Hojtsy, realityloop, YesCT, justafish, eatings, fgm, penyaskito, pcambra, -
(photo: stpaultim)


-, jthorson, opdavies, drumm, RuthieF, -, -, killes, dasrecht
(photo: stpaultim)

Feedback about the sprints

Please contact me to get your DrupalCon Amsterdam sprint related blog added to the list here.

Upcoming sprints

Plan your travel for the next event so you can sprint with us too!

Corrections

If there are corrections, for example of names of people in the pictures, please let me know. -Cathy, @YesCT, or Drupal.org contact form.

Tags: Drupal, Sprints, DrupalCon, Drupal Planet
Categories: FLOSS Project Planets

Calvin Spealman: The Curl Pipe

Planet Python - Wed, 2014-10-29 22:56
If anything deserves to be called an anti-pattern it is probably the common and worry-inducing practice of documenting your installation process by asking users to copy and paste a line into their shell that will snag some file off the internet and pipe its contents directly into the shell to execute.
Sometimes this is even done as root.
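The pattern in question looks something like this (the URL is a placeholder, not a real installer):
$ curl -sSL https://example.com/install.sh | sh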
This is something known to be awful, but which remains a cornerstone via its use by some of the most important tools in our belts. Homebrew does it. NPM does it, too. And some projects look better, but are they? Pip asks you to download get-pip.py and run it to install, which isn’t practically any different than piping from curl, just less efficient.
But worst of all, we might as well be doing this even more often, because our most depended-upon tooling is all just as guilty even without doing the curl pipe sh dance. What do you think happens when you pip install your favorite Python package, anyway? Pip downloads a file from the internet and executes it. Simple as that, for the purposes here. Sure, these days we have saner defaults. It has to be HTTPS and it has to be from PyPI by default, but it's not like these packages are screened.
For all our concerns about security and frets over SHELLSHOCK and POODLE vulnerabilities, doesn’t it seem like the developer community does an awful lot of executing random files off the internet?
Categories: FLOSS Project Planets

Matthew Garrett: On joining the FSF board

Planet Debian - Wed, 2014-10-29 20:45
I joined the board of directors of the Free Software Foundation a couple of weeks ago. I've been travelling a bunch since then, so haven't really had time to write about it. But since I'm currently waiting for a test job to finish, why not?

It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.

Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.

However.

Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.

That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.

So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.

People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to be asked to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.

Categories: FLOSS Project Planets

Armin Ronacher: Don't Panic! The Hitchhiker's Guide to Unwinding

Planet Python - Wed, 2014-10-29 20:00

Rust has an awesome developer community but sometimes emotions can cloud the discussions that are taking place. One of the more interesting discussions (or should I say flamewars) revolves around the concept of stack unwinding in Rust. I consider myself very strongly on one side of this topic but I had not been aware of how hot this topic is until I accidentally tweeted my preference. Since then I spent a bit of time reading up on the issue and figured I might write about it since it is quite an interesting topic and has huge implications on how the language works.

What is this About?

As I wrote last time, there are two different error handling models in Rust these days. In this blog post I will call them result carriers and panics.

A result carrier is a type that can carry either a success value or a failure value. In Rust there are currently two very strong ones and a weak one: the strong ones are Result<T, E>, which carries a T result value or an E error value, and Option<T>, which carries either a T result value or None, which indicates that no value exists. By convention there is also a weak one, which is bool, which generally indicates success by signalling true and failure by signalling false. There is a proposal to actually formalize the carrier concept by introducing a Carrier trait that can (within reason) convert between any of those types, which would aid composability.
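
A minimal sketch of what the result-carrier style looks like in practice (the function names and error type here are made up for illustration):

// A weak-ish carrier: Option<T> signals absence without saying why.
fn checked_div(a: i32, b: i32) -> Option<i32> {
    if b == 0 { None } else { Some(a / b) }
}

// A strong carrier: Result<T, E> carries an explicit error value.
fn div_or_error(a: i32, b: i32) -> Result<i32, String> {
    match checked_div(a, b) {
        Some(v) => Ok(v),
        None => Err("division by zero".to_string()),
    }
}

fn main() {
    // The caller is forced to consider both branches explicitly.
    match div_or_error(10, 0) {
        Ok(v) => println!("ok: {}", v),
        Err(e) => println!("error: {}", e),
    }
}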

The second way to indicate failure is a panic. Unlike value carriers, which are passed through the stack explicitly in the form of return values, panics fly through the stack until they arrive at the frame of the task, in which case they will terminate it. Panics are for all intents and purposes task failures. The way they work is by unwinding the stack slice, invoking cleanup code at each level and finally terminating the task. Panics are intended for situations where the runtime runs out of choices about how to deal with this failure.

Why the Panic?

Currently there is definitely a case to be made that too many calls in Rust will just panic. For me one of the prime examples of something that panics in a not very nice way is the default print function. In fact, your Rust Hello World example can panic if invoked the wrong way:

$ ./hello
Hello World!
$ ./hello 1< /dev/null
task '<main>' panicked at 'failed printing to stdout: Bad file descriptor'

The "task panicked" message a task responding to a panic. It immediately stops doing what it does and prints an error message to stderr. It's a very prevalent problem unfortunately with the APIs currently as people do not want to deal with explicit error handling through value carriers and as such use the APIs that just fail the task (like println). That all the tutorials in Rust also go down this road because it's easier to read is not exactly helping.

One of my favorite examples is that the rustc compiler's pretty printing will cause an internal compiler error when piped into less and less is closed with the q key because the pipe is shutting down:

$ rustc a-long-example.rs --pretty=expanded|less
error: internal compiler error: unexpected failure
note: the compiler hit an unexpected failure path. this is a bug.
note: we would appreciate a bug report: http://doc.rust-lang.org/complement-bugreport.html
note: run with `RUST_BACKTRACE=1` for a backtrace
task '<main>' panicked at 'failed printing to stdout: broken pipe (Broken pipe)'

The answer to "why the panic?" is that computers are hard and many things can fail. In C for instance printf returns an integer which can indicate if the call failed. Did you ever check it? In Rust the policy is to not let failure go through silently, and because nobody feels like handling failures of every single print statement, that panicking behavior is in place for many common APIs.

But let's assume those APIs did not panic and instead required explicit error handling: why do we even need panics? Primarily the problem comes up in situations where a programming error happened that could not have been detected at compile time, or where the environment in which the application is executing does not provide the assumptions the application makes. A good example of the former is out-of-bounds access in arrays, and an example of the latter is an out-of-memory error.

Rust has safe and unsafe access to array members, but the vast majority of array access goes through the unsafe access. Unsafe in this case does not mean that you get garbage back; it means that the runtime will panic and terminate the task. Everything is still safe, but you just killed your thread of execution.
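
For illustration, a tiny example of that checked-but-panicking index access (this program compiles, and panics at runtime by design):

fn main() {
    let v = vec![1, 2, 3];
    // Indexing past the end does not return garbage; it panics and
    // terminates the task/thread that performed the access.
    println!("{}", v[10]);
}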

For memory errors and things of that nature it's more tricky. malloc in C returns you a null pointer when it fails to allocate. It looks a bit obvious that if you just inherit this behavior you don't need to panic. What that would allow you to do is to run a bit longer after you ran out of memory, but there is very little you can actually do from this point onwards. The reason for this is that you just ran out of memory and you are at risk that any further thing you are going to do in order to recover from it is going to run into the same issue. This is especially a problem if your error representation in itself requires memory. This is hardly a problem that is unique to Rust. Python for instance, when it boots up, needs to preallocate a MemoryError so that if it ever runs out of memory it has an error it can use to indicate the failure, as it might be impossible at that point to actually allocate enough memory to represent the out-of-memory failure.

You would be limited to only calling things that do not allocate anything which might be close to impossible to do. For instance there is no guarantee that just printing a message to stdout does not require an internal allocation.

What's Unwinding?

Stack unwinding is what makes panics in Rust work. To understand how it works you need to understand that Rust sticks very close to the metal and as such stack unwinding requires an agreed upon protocol to work.

When you raise an exception you need to immediately bubble up stack frame by stack frame until you hit your exception handler. In the case of Rust you will hit the code that shuts down the task, as you cannot set up handlers yourself. However, as you blaze through the stack frames, Rust needs to execute all necessary cleanup code on each level so that no memory or resources leak.

This unwinding protocol is highly related to the calling conventions and not at all standardized. One of the big problems with stack unwinding is that it's not exactly an operation that comes naturally to program execution, at least not on modern processors. When you want to fly through some stack frames you need to figure out what the previous stack frame was. On AMD64 for instance there is no guaranteed way to get a stacktrace at all without implementing DWARF. However, stack unwinding does have the assumed benefit that because you are generally not going down the error path, there are fewer branches to take when a function returns, as the calling frame does not have to check for an error result. If an error does occur, stack unwinding automatically jumps to the error branch, and otherwise it's not considered.

What's the Problem with Unwinding?

Traditionally I think there are two problems with stack unwinding. The first one is that unlike function calling conventions, stack unwinding is not particularly standardized. This is especially a problem if you try to combine functions from different programming languages. The most portable ABI is the C ABI and that one does not know anything about stack unwinding. There is some standardization on some operating systems, but even then it does not guarantee that it will be used. For instance on Windows there is Structured Exception Handling (SEH) which however is not used by LLVM currently and as such not by Rust.

If stack unwinding is not standardized between different languages, its usefulness is automatically limited. For instance, if you want to use a C++ library from another programming language, your best bet is actually to expose a C interface for it. This also means that any function you invoke through the C wrapper needs to catch all exceptions and report them out through an alternative mechanism, making things more complicated for everybody. This causes quite a bit of pain even without actually crossing a programming language boundary. If you have ever used the PPL libraries (a framework for asynchronous task handling and parallelism) on Windows, you might have seen how they internally catch exceptions and reconstruct them in other places to make them travel between threads safely.

The second problem with stack unwinding is that it's really complex. In order to unwind a stack you need to figure out what your parent frame actually is, and that is not necessarily a simple thing to do. On AMD64, for instance, there is not enough information available on the stack to find higher stack frames, so your only options are to implement the very complex DWARF spec or to change the calling conventions so that you do have enough metadata on the stack. This might be simple for a project that has full control of all dependencies, but the moment you call into a library you did not compile, this no longer works.

It's no surprise that stack unwinding is traditionally one of the worst-supported features in programming languages. It's not unheard of for a compiler not to implement exceptions for C++, and the reason is that stack unwinding is a complex thing. Even when compilers do implement it, exceptions are very often made to work but not made to be fast.

Exceptions in a Systems Language

You don't have to be a kernel developer to not be a fan of stack unwinding. Anyone who wants to develop a shared library used by other people will sooner or later have to think about how to prevent things from throwing exceptions. In C++ it's not hard to wrap all exported functions in huge try / catch blocks that catch everything and report a failure code instead, but in Rust it's currently a bit more complex.

The reason for this is that in Rust you cannot actually handle exceptions. When a function panics it terminates the task. This implies that there needs to be a task in the first place that can isolate the exception, or you cause issues for your users. And because tasks are actually threads, the cost of encapsulating every function call in a thread does not sound very appealing.

Today, if you write a Rust library that exports a C ABI and is used by other people, you already cannot call into functions that might panic, unless your system happens to keep a worker thread running that you dispatch messages into.
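
What that pushes you towards is sticking to non-panicking operations inside the exported functions and reporting failure through a C-style return code. A rough sketch of the idea (the function name and the error-code convention are made up for illustration):

    use std::slice;

    /// Returns 0 on success, 1 on bad input. Hypothetical convention for this sketch.
    #[no_mangle]
    pub extern "C" fn mylib_sum(ptr: *const u32, len: usize, out: *mut u64) -> i32 {
        if ptr.is_null() || out.is_null() {
            return 1;
        }
        // unsafe is needed to trust the caller's pointer/length pair.
        let values = unsafe { slice::from_raw_parts(ptr, len) };
        let mut sum: u64 = 0;
        for &v in values {
            // checked_add instead of plain + so overflow becomes an error
            // code rather than a panic escaping across the C boundary.
            sum = match sum.checked_add(v as u64) {
                Some(s) => s,
                None => return 1,
            };
        }
        unsafe { *out = sum };
        0
    }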

Panicking Less and Disabling Unwinding

A wish I personally have for the language is that you can write code that is guaranteed not to panic unless it really ends up in a situation where it has no other choice. The biggest area of concern there is traditionally memory allocation. However, in the vast majority of situations, failure to allocate memory is actually not something you need to be concerned with. Modern operating systems make it quite hard to end up in a situation where an allocation fails: there is virtual memory management, swapping, and OOM killers. A malloc that returns null in a real-world situation, other than by passing an unrealistically large size, is quite uncommon. And on embedded systems or in similar situations you usually already keep an eye on whether you are within your budget and just avoid ever hitting the limit. The allocation problem is also a lot smaller if you are in a specialized context where you avoid generic containers that allocate memory on regular operations.

Once panics are unlikely to happen, it becomes an option to disable support for unwinding and to just abort the application if a panic does occur. While this sounds pretty terrible, it is actually the right thing to do for a wide range of environments.
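
As a side note from the present day, Cargo later grew exactly this switch as a profile setting (it postdates this post and is shown here only to illustrate the trade-off being described):

    # Cargo.toml
    [profile.release]
    panic = "abort"   # any panic aborts the process instead of unwinding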

The best way to isolate failures is at the operating system level, through separate processes. This sounds worse than it actually is, for two reasons. The first is that the operating system provides good support for shipping data between processes. Especially for server applications, the ability to have a watchdog process that runs very little critical code, opens sockets and passes the file descriptors into worker processes is a very convincing concept. If you do end up crashing a worker, no request is lost other than the one currently being handled, if the worker is single-threaded. If it's multi-threaded you might kill a few more requests, but new, incoming requests are completely oblivious that a failure happened, as they queue up in the socket held by the watchdog. This is something that systemd and launchd, for instance, provide out of the box.
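
A sketch of what that looks like with systemd socket activation (the unit names and the binary path are made up; systemd holds the listening socket and restarts the worker on failure while queued connections survive):

    # worker.socket
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # worker.service
    [Service]
    ExecStart=/usr/local/bin/my-worker
    Restart=on-failure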

The second reason is that in Rust especially, a process boundary is a lot less scary than in other programming languages, because the language design strongly discourages global state and encourages message passing.

Less Panic Needs Better APIs

The bigger problem than making panic fatal and removing unwinding is actually providing good APIs that make this less important. The biggest problem with coming up with replacements for panics is that every stack frame needs to deal with failure explicitly. If you wrote a function that only ever returned true or false to indicate success or failure, but you now need to call into something that might fail with important and valuable error information, you do not have a channel to pass that information out without changing your own function's interface.
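
A small sketch of that interface problem (the error type and function names are made up for illustration): the moment the inner call starts producing real error information, a bool return type has nowhere to put it, and the outer signature has to change.

    #[derive(Debug)]
    struct ConfigError(String);

    fn parse_port(raw: &str) -> Result<u16, ConfigError> {
        raw.parse()
            .map_err(|_| ConfigError(format!("not a port number: {:?}", raw)))
    }

    // Before: only success or failure; the reason is lost on the way up.
    fn load_config_old(raw: &str) -> bool {
        parse_port(raw).is_ok()
    }

    // After: the signature itself has to change to give the error a channel out.
    fn load_config(raw: &str) -> Result<u16, ConfigError> {
        let port = parse_port(raw)?; // `?` today; the 2014-era spelling was the try! macro
        Ok(port)
    }

    fn main() {
        println!("{:?}", load_config_old("eighty"));
        println!("{:?}", load_config("eighty"));
    }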

The other problem is that nobody wants to deal with failure if they can avoid doing so. The print example is a good one because it's the type of operation where people really do not want to deal with it. "What can go wrong with printing?" Unfortunately, a lot. There are some proposals for Rust about how error propagation and handling can be made nicer, but we're quite far from that reality.

Until we arrive there, I don't think disabling stack unwinding would be a good idea. In the long run, however, I hope it's a goal, because it would make Rust both more portable and more interesting as a language to write reusable libraries in.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-29

Planet Apache - Wed, 2014-10-29 19:58
Categories: FLOSS Project Planets

Brett Cannon: Bringing someone into the modern age of technology

Planet Python - Wed, 2014-10-29 19:37

A relative just visited whose current technology consists of a Windows computer and a flip-phone. It was one of those situations where someone was coming to me as a blank slate for a technology upgrade! So I seized the moment and made some recommendations.

For a phone I said he should get one from Motorola, Google, or Apple. Exactly which phone would come down to price and who was going to provide technical support.

For a computer we actually gave the relative an old Chromebook. With no baggage in the form of some set of programs he had to have access to, the decision was easy. Even if we didn’t have a spare Chromebook to give away I would have suggested a Chromebook for its price and ease of maintenance. This probably would have also led to a suggestion of a new printer that supported Google Cloud Print to work with the Chromebook. And then the final perk was the integration with Google Drive as a way to move him into cloud backup.

For watching movies I would suggest getting a Netflix account and either a Chromecast or Roku box depending on their comfort level. I personally prefer Chromecast for its flexibility but I realize some people just prefer having a dedicated remote control.

Finally, we said to consider a music streaming service and Sonos. For ease of setup Sonos is great when someone doesn’t have pre-existing A/V equipment they need to work with. And a streaming service like Google Play Music All-Access gives quick access to digital music at a level that people upgrading their technological lives are not used to.

Categories: FLOSS Project Planets

Gizra.com: RESTful Discovery - Who knows about your API?

Planet Drupal - Wed, 2014-10-29 18:00

As extremely pedantic developers we take documenting our APIs very seriously. It's not rare to see a good patch rejected in code review just because the PHPdocs weren't clear enough, or a @param wasn't declared properly.

In fact, I often explain to junior devs that the most important part of a function is its signature, and the PHPdocs. The body of the function is just "implementation details". How it communicates its meaning to the person reading it is the vital part.

But where does this whole pedantic mindset go when we open up our web-services?
I would argue that at least 95% of the developers who expose their web-service simply enable RESTws without any modifications. And here's what a developer implementing your web-service will see when visiting /node.json:

Continue reading…

Categories: FLOSS Project Planets