FLOSS Project Planets

Tag1 Consulting: yumrepos Puppet Module

Planet Drupal - Mon, 2014-12-15 15:12

Earlier this year we undertook a project to upgrade a client's infrastructure to all new servers including a migration from old Puppet scripts which were starting to show their age after many years of server and service changes. During this process, we created a new set of Puppet scripts using Hiera to separate configuration data from modules.

read more

Categories: FLOSS Project Planets

Enthought: Plotting in Excel with PyXLL and Matplotlib

Planet Python - Mon, 2014-12-15 14:27

Author: Tony Roberts, creator of PyXLL, a Python library that makes it possible to write add-ins for Microsoft Excel in Python. Download a FREE 30 day trial of PyXLL here.

Python has a broad range of tools for data analysis and visualization. While Excel is able to produce various types of plots, sometimes it’s either not quite good enough or it’s just preferable to use matplotlib.

Users already familiar with matplotlib will be aware that when showing a plot as part of a Python script, the script stops while the plot is shown and continues once the user has closed it. When doing the same in an IPython console, control returns to the IPython prompt immediately when a plot is shown, which is useful for interactive development.

Something that has been asked a couple of times is how to use matplotlib within Excel using PyXLL. As matplotlib is just a Python package like any other it can be imported and used in the same way as from any Python script. The difficulty is that when showing a plot the call to matplotlib blocks and so control isn’t returned to Excel until the user closes the window.

This blog shows how to plot data from Excel using matplotlib and PyXLL so that Excel can continue to be used while a plot window is active, and so that same window can be updated whenever the data in Excel is updated.

Basic plotting

Matplotlib can plot just about anything you can imagine! For this blog I’ll be using only a very simple plot to illustrate how it can be done in Excel. There are examples of hundreds of other types of plots on the matplotlib website that can all be used in exactly the same way as this example in Excel.

To start off we’ll write a simple function that takes two columns of data (our x and y values), calculates the exponentially weighted moving average (EWMA) of the y values, and then plots them together as a line plot.

Note that our function could take a pandas dataframe or series quite easily, but just to keep things as simple as possible I’ll stick to plain numpy arrays. To see how to use pandas datatypes with PyXLL see the pandas examples on github: https://github.com/pyxll/pyxll-examples/tree/master/pandas.

from pyxll import xl_func
from pandas.stats.moments import ewma
import matplotlib.pyplot as plt

@xl_func("numpy_column<float> xs, "
         "numpy_column<float> ys, "
         "int span: string")
def mpl_plot_ewma(xs, ys, span):
    # calculate the moving average
    ewma_ys = ewma(ys, span=span)

    # plot the data
    plt.plot(xs, ys, alpha=0.4, label="Raw")
    plt.plot(xs, ewma_ys, label="EWMA")
    plt.legend()

    # show the plot
    plt.show()

    return "Done!"

To add this code to Excel, save it to a Python file and add that file to the pyxll.cfg file (see the PyXLL documentation for details).

Calling this function from Excel brings up a matplotlib window with the expected plot. However, Excel won’t respond to any user input until the window is closed, because the plt.show() call blocks.

The unsmoothed data is generated with the Excel formula =SIN(B9)+SIN(B9*10)/3+SIN(B9*100)/7. This could just as easily be data retrieved from a database or the output from another calculation.
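
If you want to experiment outside Excel first, the same series is easy to generate in plain Python. This is just a convenience sketch; the x range and step below are assumptions, not values taken from the original workbook.

import numpy as np

# assumed x range and step; the workbook's actual values aren't given
xs = np.arange(0.0, 10.0, 0.01)

# same shape as the Excel formula =SIN(x)+SIN(x*10)/3+SIN(x*100)/7
ys = np.sin(xs) + np.sin(xs * 10) / 3 + np.sin(xs * 100) / 7

These arrays mirror what the two worksheet columns would contain, so they are handy for testing the plotting code outside Excel.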

Non-blocking plotting

Matplotlib has several backends which enable it to be used with different UI toolkits.

Qt is a popular UI toolkit with Python bindings, one of which is PySide. Matplotlib supports this as a backend, and we can use it to show plots in Excel without using the blocking call plt.show(). This means we can show the plot and continue to use Excel while the plot window is open.

In order to make a Qt application work inside Excel, it needs to be polled periodically from the main Windows message loop. This means it will respond to user input without blocking the Excel process or stopping Excel from receiving input. The Windows ‘timer’ module is an easy way to do this, and it has the advantage of keeping all the UI code in the same thread as Excel’s main window loop, which keeps things simple.

from PySide import QtCore, QtGui
import timer

def get_qt_app():
    """
    returns the global QtGui.QApplication instance and starts
    the event loop if necessary.
    """
    app = QtCore.QCoreApplication.instance()
    if app is None:
        # create a new application
        app = QtGui.QApplication([])

        # use timer to process events periodically
        processing_events = {}
        def qt_timer_callback(timer_id, time):
            if timer_id in processing_events:
                return
            processing_events[timer_id] = True
            try:
                app = QtCore.QCoreApplication.instance()
                if app is not None:
                    app.processEvents(QtCore.QEventLoop.AllEvents, 300)
            finally:
                del processing_events[timer_id]

        timer.set_timer(100, qt_timer_callback)

    return app

This can be used to embed any Qt windows and dialogs in Excel, not just matplotlib windows.
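
As a rough illustration of that point (this example is not from the original article, and the function name is made up), the same get_qt_app() helper could drive a plain, non-modal Qt widget from a worksheet function, so Excel keeps responding while the window is open:

from pyxll import xl_func
from PySide import QtGui

# keep references so the windows aren't garbage collected
_message_windows = {}

@xl_func("string name, string text: string")
def show_message_window(name, text):
    """Show `text` in a small non-modal Qt window titled `name`."""
    get_qt_app()  # ensure the QApplication exists and is polled (helper defined above)

    window = _message_windows.get(name)
    if window is None:
        window = QtGui.QLabel()
        window.setWindowTitle(name)
        window.resize(400, 100)
        _message_windows[name] = window

    window.setText(text)
    window.show()  # non-blocking; control returns to Excel immediately
    return "[Shown '%s']" % name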

Now all that’s left is to update the plotting function to plot to a Qt window instead of using pyplot.show(). Also we can give each plot a name so that when the data in Excel changes and our plotting function gets called again it re-plots to the same window instead of creating a new one each time.

from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar

# dict to keep track of any plot windows
_plot_windows = {}

@xl_func("string figname, "
         "numpy_column<float> xs, "
         "numpy_column<float> ys, "
         "int span: string")
def mpl_plot_ewma(figname, xs, ys, span):
    """
    Show a matplotlib line plot of xs vs ys and ewma(ys, span)
    in an interactive window.

    :param figname: name to use for this plot's window
    :param xs: list of x values as a column
    :param ys: list of y values as a column
    :param span: ewma span
    """
    # create the figure and axes for the plot
    fig = Figure(figsize=(600, 600), dpi=72, facecolor=(1, 1, 1), edgecolor=(0, 0, 0))
    ax = fig.add_subplot(111)

    # calculate the moving average
    ewma_ys = ewma(ys, span=span)

    # plot the data
    ax.plot(xs, ys, alpha=0.4, label="Raw")
    ax.plot(xs, ewma_ys, label="EWMA")
    ax.legend()

    # Get the Qt app.
    # Note: no need to 'exec' this as it will be polled in the main windows loop.
    app = get_qt_app()

    # generate the canvas to display the plot
    canvas = FigureCanvas(fig)

    # Get or create the Qt windows to show the chart in.
    if figname in _plot_windows:
        # get the existing window from the global dict and
        # clear any previous widgets
        window = _plot_windows[figname]
        layout = window.layout()
        if layout:
            for i in reversed(range(layout.count())):
                layout.itemAt(i).widget().setParent(None)
    else:
        # create a new window for this plot and store it for next time
        window = QtGui.QWidget()
        window.resize(800, 600)
        window.setWindowTitle(figname)
        _plot_windows[figname] = window

    # create the navigation toolbar
    toolbar = NavigationToolbar(canvas, window)

    # add the canvas and toolbar to the window
    layout = window.layout() or QtGui.QVBoxLayout()
    layout.addWidget(canvas)
    layout.addWidget(toolbar)
    window.setLayout(layout)

    # showing the window won't block
    window.show()

    return "[Plotted '%s']" % figname

When the function’s called it brings up the plot in a new window and control returns immediately to Excel. The plot window can be interacted with and Excel still responds to user input in the usual way.

When the data in the spreadsheet changes the plot function is called again and it redraws the plot in the same window.

Next steps

The code above could be refined and the code for creating, fetching and clearing the windows could be refactored into some reusable utility code. It was presented in a single function for clarity.
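
One possible shape for that refactoring, sketched here purely as an illustration (the helper name is made up), is a small function that owns the window bookkeeping from the listing above:

from PySide import QtGui

# module-level cache of plot windows, as in the listing above
_plot_windows = {}

def get_plot_window(figname, size=(800, 600)):
    """Return the (possibly reused) QWidget for `figname`,
    with any previously added widgets removed from its layout."""
    window = _plot_windows.get(figname)
    if window is None:
        # first time we've seen this name: create and remember a new window
        window = QtGui.QWidget()
        window.resize(*size)
        window.setWindowTitle(figname)
        _plot_windows[figname] = window
    else:
        # reuse the existing window, clearing out the old canvas and toolbar
        layout = window.layout()
        if layout:
            for i in reversed(range(layout.count())):
                layout.itemAt(i).widget().setParent(None)
    return window

With a helper like this, mpl_plot_ewma would shrink to building the figure, calling get_plot_window(figname), and adding the new canvas and toolbar.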

Plotting to a separate window from Excel is sometimes useful, especially as the interactive controls can be used and may be incorporated into other Qt dialogs. However, sometimes it’s nicer to be able to present a graph in Excel as a control in the Excel grid in the same way the native Excel charts work. This is possible using PyXLL and matplotlib and will be the subject of the next blog!

All the code from this blog is available on GitHub: https://github.com/pyxll/pyxll-examples/tree/master/matplotlib.

Categories: FLOSS Project Planets

Holger Levsen: 20121214-not-everybody-is-equal

Planet Debian - Mon, 2014-12-15 14:25
We ain't equal in Debian neither and wishful thinking won't help.

"White people think calling them white is racist." - "White people think calling them racist is racist."

(Thanks to and via 2damnfeisty and blackgirlsparadise!)

Posted here (in this white male community...) as food for thought. What else is invisible for whom? Or hardly visible or distorted or whatever shade of (in)visible... - and how can we know about things we cannot (yet) see...

Categories: FLOSS Project Planets

Drupal Association News: Meeting Personas: The Drupal Newcomer

Planet Drupal - Mon, 2014-12-15 14:01

 This post is part of an ongoing series detailing the new personas that have been drawn up as part of our Drupal.org user research.

Bronwen Buswell is a newcomer to Drupal. Based out of Colorado Springs, Colorado, Bronwen works as a Conference and Communications Coordinator at a nonprofit called PEAK Parent Center, which is dedicated to supporting the families of children with disabilities. While Bronwen’s role isn’t technical, she needs to use her company’s website as part of getting her work done.

“We’re federally designated by the US Department of Education, so we try to be a total one-stop shop information and referral center,” Bronwen said. “Families can call us about any situation related to their child, and we will either refer them to the right agency or provide what they need. We’re focused on helping families navigate the education and special education systems, and we serve families with children ages birth through 26, with all sorts of disabilities, including autism, down syndrome, learning disabilities, and so on."

Keeping Up With Technology

In the past few years, PEAK Parent Center’s website became very outdated, and this was a problem. Bronwen’s clients were very dependent on being able to receive assistance over the phone, as many of the resources that the center provides are not readily available online. When updates needed to be made, Bronwen and her company were forced to rely on their tech vendors to make changes to the website, as they were working with a custom solution rather than a CMS.

“Our website was pre-cutting edge, made by local vendors, all in HTML code and SQL database. We had excellent tech vendors who helped us create what we needed, and this was before the CMS options came along so it was really good at first. However, in the past 5 to 6 years, it has gotten really archaic, and we’re super reliant upon our vendors for updating our website. What’s simple in a CMS is complex for us,” Bronwen said.

After doing lots of research and working with the federal government to find the best solution for PEAK Parent Center and other centers like it, Bronwen and her colleagues decided to explore using Drupal to create a site template that could be deployed for PEAK Parent Center  and for other similar centers that it supports across the country.

“We’re the technical assistance center for parent centers like ours in a 12-state region,” said Bronwen. “When [Drupal Association Executive Director] Holly Ross was at NTEN, we started going to their conferences, which led us to launch a tech leadership initiative where we supported participating parent centers across the nation. As part of that, we got connected with great consultants and thinkers in tech, and we were asked by the US Department of Education to participate in the creation of website templates in 2 content management systems — WordPress and Drupal — that could be used in other parent centers in the future."

Getting Experienced Assistance

With help from Aaron Pava and Nikki Pava at Alegria Partners, the staff at PEAK Parent Center has been learning to use their new Drupal website. Aaron has advised Bronwen and her colleagues every step of the way, from proposing solutions in the discovery process to walking Bronwen and her coworkers through specific tasks.

Occasionally, Bronwen encounters small problems due to updates or little glitches with distributions, which is why Aaron has encouraged her to get involved and do some training on Drupal. Unfortunately, most of Bronwen’s time is spent trying to get the website ready to launch, as she’s under pressure from the federal government and her board of directors to deploy the new site. Though Bronwen isn’t working on the technical side of the website, she’s busy populating it with content and making sure that it will be a useful tool for her clients.

“What I haven’t done is specific Drupal training,” said Bronwen. “I know about Lynda and Build A Module, but I’ve only had time to do sessions one-on-one with Aaron, for example, ‘Here’s how to upload content in this template.’

"I have learned a lot on Drupal.org, but it’s been primarily through Aaron sending me a link— for example, he’ll send me links about Red Hen since we’re exploring our CRM options— but I haven’t surfed around it much,” Bronwen added.

Areas For Improvement

Bronwen wishes there was a recommended Drupal 101 section on Drupal.org, something that would help content editors like herself learn to use the CMS better, but for now, she is limited to relying on more educated ambassadors for Drupal to point her in the right direction.

“It’s delicate to recommend vendors,” said Bronwen, "but it seems that the community is really powerful, and is certainly one of the most unique aspects that sets Drupal aside from other CMS options. Even a few vendors recommended by the community, or a recommended Drupal 101 lesson where you can go through it, go off and work in Drupal, and come back and get Drupal 201 would be really valuable for me.

“I know that there are local Drupal meet-ups that happen all over the country,” Bronwen added. “[One group we talked with] told us that nonprofits can go to these events and say ‘I need this or that,’ and some hardcore Drupal techie will take the work on pro bono. That was another factor that helped draw us to using Drupal — the availability of the community. It would be useful if there was more information on how to tap into those meetups, perhaps, when they’re happening."

Bronwen knows that the Drupal community is really powerful, and considers it one of the most unique aspects that sets Drupal apart from other CMS options. She is excited by the availability of the Drupal community, and is looking forward to interacting and working with it as she continues to run and improve PEAK Parent Center’s website.

Personal blog tags: drupal.org user research, persona interviews
Categories: FLOSS Project Planets

Philippe Normand: Web Engines Hackfest 2014

Planet Python - Mon, 2014-12-15 11:51

Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (also hosting the event), Adobe and Collabora.

As usual I spent most of the time working on the WebKitGTK+ GStreamer backend, and Sebastian Dröge kindly joined and helped out quite a bit; make sure to read his post about the event!

We first worked on the WebAudio GStreamer backend. Sebastian cleaned up various parts of the code, including the playback pipeline and the source element we use to bridge the WebCore AudioBus with the playback pipeline. On my side I finished the AudioSourceProvider patch that had been abandoned for a few months (years) in Bugzilla. It’s an interesting feature to have so that web apps can use the WebAudio API with raw audio coming from Media elements.

I also hacked on GstGL support for video rendering. It’s quite interesting to be able to share the GL context of WebKit with GStreamer! The patch is not ready yet for landing but thanks to the reviews from Sebastian, Mathew Waters and Julien Isorce I’ll improve it and hopefully commit it soon in WebKit ToT.

Sebastian also worked on Media Source Extensions support. We had a very basic, non-working, backend that required… a rewrite, basically :) I hope we will have this reworked backend soon in trunk. Sebastian already has it working on Youtube!

The event was interesting in general, with discussions about rendering engines, rendering and JavaScript.

Categories: FLOSS Project Planets

Django Weblog: Announcing a redesign of the Django websites

Planet Python - Mon, 2014-12-15 11:08

The Django project is excited to announce that after many years, we're launching a redesign of our primary website, our documentation site and our issue tracker.

Django's website has been largely unchanged since the project was launched back in 2005, so you can imagine how excited we are to update it. The original design was created by Wilson Miner while he was working at the Lawrence Journal-World, the newspaper at which Django was created. Wilson's design has held up incredibly well over the years, but new design aesthetics and technologies such as mobile devices, web fonts, HTML5 and CSS3 have drastically changed the way websites are built.

The old design was also focused on introducing a new web framework to the world. Django is now a well-established framework, so the website has a much broader audience -- not just new Django users, but established users, managers, and people new to programming. This redesign also allows us to shine a spotlight on areas of the community that have historically been hidden, such as the Django Software Foundation (DSF), the community of projects that support Django developers (such as people.djangoproject.com and djangopackages.com), and the various educational and consulting resources that exist in our community.

This redesign is the result of multiple attempts and the collaboration of a number of groups and individuals. Work on the redesign started in 2010. Initially, a number of people (including Christian Metts and Julien Phalip) tried to produce a new design as individual efforts; however, these efforts stalled due to a lack of momentum. In 2012, the DSF developed a design brief and put out a call for a volunteer team to redesign the site. The DSF received a number of applicants and selected interactive agency Threespot to complete the design task. For a number of reasons (almost entirely the DSF's fault), this design got most of the way to completion, but was never 100% finished.

Earlier this year, Andrew McCarthy took on the task of completing the design work including a style guide for future expansions. The design was then handed over to the DSF's website working group to convert that website into working code.

Since everyone is a volunteer on this team, we'd like to name them individually: Adrian Holovaty, Audrey Roy, Aymeric Augustin, Baptiste Mispelon, Daniel Roy Greenfeld, Elena Williams, Jannis Leidel, Ola Sitarska, Ola Sendecka, Russell Keith-Magee, Tomek Paczkowski and Trey Hunner. One of the DSF's current fellows, Tim Graham, also helped by finding bugs and reviewing tickets. Of course we couldn't have done it without the backing of the DSF board of directors over the years.

Now we'd like to invite you to share in the result of our efforts and help us make it even better. Please test-drive the site and let us know what you think.

If you find a bug -- which we're sure some will -- open a ticket on the website's issue tracker. If you want to contribute directly to the site's code please don't hesitate to join us on Freenode in the channel #django-websites.

We also wouldn't mind if you'd tell us about your experience on Twitter using the hashtag #10YearsLater, or by tweeting at @djangoproject.

So now, without further ado, please check out the new site djangoproject.com, the documentation docs.djangoproject.com and our issue tracker code.djangoproject.com.

That's all for now. Happy coding, everyone!

...and we'll see you all again in 2023 when we launch our next redesign :-)

Categories: FLOSS Project Planets

Midwestern Mac, LLC: Highly-Available PHP infrastructure with Ansible

Planet Drupal - Mon, 2014-12-15 11:03

I just posted a large excerpt from Ansible for DevOps over on the Server Check.in blog: Highly-Available Infrastructure Provisioning and Configuration with Ansible. In it, I describe a simple set of playbooks that configures a highly-available infrastructure primarily for PHP-based websites and web applications, using Varnish, Apache, Memcached, and MySQL, each configured in a way optimal for high-traffic and highly-available sites.

Here's a diagram of the ultimate infrastructure being built:

Categories: FLOSS Project Planets

SitePoint PHP Drupal: AngularJS in Drupal Apps

Planet Drupal - Mon, 2014-12-15 11:00

Angular.js is the hot new thing right now for designing applications in the client. Well, it’s not so new anymore but is sure as hell still hot, especially now that it’s being used and backed by Google. It takes the idea of a JavaScript framework to a whole new level and provides a great basis for developing rich and dynamic apps that can run in the browser or as hybrid mobile apps.

In this article I am going to show you a neat little way of using some of its magic within a Drupal 7 site. A simple piece of functionality but one that is enough to demonstrate how powerful Angular.js is and the potential use cases even within heavy server-side PHP frameworks such as Drupal. So what are we doing?

We are going to create a block that lists some node titles. Big whoop. However, these node titles are going to be loaded asynchronously using Angular.js and there will be a textfield above them to filter/search for nodes (also done asynchronously). As a bonus, we will also use a small open source Angular.js module that will allow us to view some of the node info in a dialog when we click on the titles.

Continue reading AngularJS in Drupal Apps on SitePoint.

Categories: FLOSS Project Planets

Logilab: Generate stats from your SaltStack infrastructure

Planet Python - Mon, 2014-12-15 10:52

As presented at the November French meetup of saltstack users, we've published code to generate some statistics about a saltstack infrastructure. We're using it, for the moment, to identify which parts of our infrastructure need attention. One of the tools we're using to monitor this distance is munin.

You can grab the code at bitbucket salt-highstate-stats, fork it, post issues, discuss it on the mailing lists.

If you're French-speaking, you can also read the slides of the above presentation (mirrored on slideshare).

Hope you find it useful.

Categories: FLOSS Project Planets

Cheppers blog: Apache Solr and Drupal - Part I: Set up Apache Solr to enhance Drupal search

Planet Drupal - Mon, 2014-12-15 10:40

Today most websites have search functionality. With the help of Apache Solr, the time spent waiting for a search result can be radically reduced. In this article we are going to set up a basic search infrastructure on a *nix-based system.

Categories: FLOSS Project Planets

Drupal Commerce: Major improvements in addressfield 7.x-1.0-rc1

Planet Drupal - Mon, 2014-12-15 10:39

Many people know that addressfield hasn’t been the easiest module to maintain. There are over 200 countries in the world, each with its own addressing requirements. Addressfield attempted to provide a sane default for all of them, along with a plugin architecture for handling per-country customizations. But with so many countries, the list of needed improvements became never-ending, and the customizations themselves started gathering in only one plugin (address.inc), which quickly became impossible to maintain.

A radical change was needed, so after a lot of research we introduced a new plan for Drupal 8, along with a brand new PHP library we can depend on from addressfield 8.x-2.x. The new plan revolves around two powerful ideas:

  • The introduction of address formats, which hold information on how a country’s address and its form need to be rendered and validated.
  • The use of Google’s addressing dataset, freely available and built for Chrome and Android, with address formats for 200 countries.

The introduced solutions were obviously superior to anything we had before that, but Drupal 8 is still far from production, and we needed improvements on our Drupal 7 sites today, so we decided to try and backport as many concepts as we could into the 7.x-1.x codebase. The result of that is addressfield 7.x-1.0-rc1:

Read more...

Categories: FLOSS Project Planets

Annertech: Best Modules for Media in Drupal: How to Install and Configure Scald

Planet Drupal - Mon, 2014-12-15 10:19
Best Modules for Media in Drupal: How to Install and Configure Scald

In the first part of this series, “Scalable & Sustainable Media Management for Drupal Websites”, I talked about media management solutions for Drupal. Specifically, I am interested in managing large amounts of files in a reusable manner. The solution I like best at the moment is Scald.

Just so we don't get confused with some phrasing, Scald stores all media items as custom entities called "atoms"; Scald "contexts" are very similar to view modes.

Categories: FLOSS Project Planets

Flash not working in iceweasel

LinuxPlanet - Mon, 2014-12-15 09:30
If some update of iceweasel has broken Flash, and all Flash sites stop working, we might see the following message on the page Tools -> Add-ons -> Plugins:

shockwave flash is known to be vulnerable

The link given on the page to update Flash might be broken.

The workaround for this is to update Flash with the latest version from

https://get.adobe.com/flashplayer/

Select the .tar.gz version for Debian systems. Close all instances of iceweasel and untar the downloaded package.

$ tar -xzvf install_flash_player_11_linux.i386.tar.gz

After untarring, we will get a folder named usr, a file libflashplayer.so, and a file readme.txt.

We need to copy the file libflashplayer.so to the folder which contains the plugins for iceweasel.

$ cp libflashplayer.so /usr/lib/iceweasel/.

For Mozilla Firefox, copy it to:

$cp libflashplayer.so /usr/lib/mozilla/plugins/libflashplayer.so

Now launch iceweasel and the problem with flash should not occur.
Categories: FLOSS Project Planets

PyTennessee: Young Coder tickets on Sale today at 12PM CST (noon)

Planet Python - Mon, 2014-12-15 09:04

Taught by the AMAZING Katie Cunningham!

Katie has led young coders tutorials at PyCon, PyOhio, PyTennessee, and others, and we are beyond thrilled to welcome her to our little event. Learn more about our young coders event at our website. Registration for young coders will be free, and tickets will be available December 15th.

Katie Cunningham is a fervent advocate for Python, Open Source Software, and teaching more people how to program. She's a frequent speaker at open source conferences, such as PyCon and DjangoCon, speaking on beginner topics such as putting someone's first site in the cloud and making a site that is accessible to everyone.

She also helps organize PyLadies in the DC area, a program designed to increase diversity in the Python community. She has taught classes for the organization, bringing novices from installation to writing their first app in 48 hours.

Katie is an active blogger at her website (http://therealkatie.net), covering issues such as Python, accessibility, and the trials and tribulations of working from home.

Katie is also the author of Python in 24 Hours and the Accessibility Handbook.

A few pictures from last year's event: https://www.flickr.com/photos/adamwfletcher/sets/72157641721689104/

A Post by Google about it: http://google-opensource.blogspot.com/2014/03/teaching-next-generation-to-code-young.html

My favorite, a mother's account: http://www.mommie2zs.com/2014/02/23/turning-the-tide/

We are still looking for more sponsors for young coders, so if you are interested you can learn more at our website.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Tim Roberts

Planet Python - Mon, 2014-12-15 08:30

The PyDev of the Week this week is Tim Roberts. While we’ve never met in person, he’s helped me a time or two on several of the Python 3rd party mailing lists, such as wxPython, Reportlab and (I think) PyWin32. He runs his own consulting business and can be contacted by email at the following: timr@probo.com. Let’s spend some time getting to know him!

Can you tell us a little about yourself (hobbies, education, etc):

I have a BS in Computer Science from Oregon State University (go Beavers!).  I spent 10 years doing mainframe operating system programming for Control Data before switching to Windows drivers during the Windows 3.0 beta.  I’ve been doing Windows driver and driver support work (like diagnostics and support apps) since that time.

I’ve been an owner in a small consulting business for 20 years now.  That’s a pretty good record for a small business.  Besides the 3 owners, we have 5 employees, so I guess we are “job creators”.

When I’m not in front of my keyboard, I am a musician.  I play clarinet in a community band, and I play piano to accompany several of the local high school choirs.  I’m a Broadway musical nut, so I play in the pit orchestras for the musicals for three of the local school districts.  I’ll be doing “Peter Pan” and “Shrek” this spring.  I’ve also been a square dancer for nearly 25 years, and I’m part of the state federation of square dance clubs.

 

Why did you start using Python?

I’ve always been an explorer.  I want my computer to do tedious and repetitive things for me.  I was moderately adept at using Perl to automate my environment, but I hated the fact that Perl programs look like RS-232 line noise, and that it is impossible to understand a Perl program six weeks after you wrote it.  When I saw Python 1.6 in 2000, I was immediately hooked by the human-readable syntax, and by the large and functional standard library.  At that time, we were administering our own Linux servers, trying to come up with reasonable (and affordable) ways to manage email.  I wrote a couple of fully-featured email packages in Python, in part because the tools were so good.

Many programmers underestimate the importance of readable code.  After all, as the joke goes, if it was hard to write, it should be hard to read.  After all, why do they call it “code”?  But the fact is that most programs are read MUCH more often than they are written.  Python, with its non-punctuation-based syntax, makes it much easier to write programs that can be read as prose.

 

What other programming languages do you know and which is your favorite?

As a rule, the work I deliver to my clients is in C++ and C, because that’s where the driver world lives.  I’ve done a large amount of assembler work for a strange variety of processors, plus C# and a lot of JavaScript.  When I was with CDC, I was one of the best Fortran coders around, but that knowledge has all leaked away.  I could probably still write COBOL and Basic if I had to.  I was also a huge fan of Borland’s Delphi.  I used to do all of my tools in Delphi, but I haven’t done any Delphi programming since Python came into my life.

Python is my favorite.  When doing driver work, it’s often necessary to have diagnostic suites to exercise the hardware and the driver interfaces.  Python is great for that, because it’s so easy to write an extensible command interpreter.

I’ve tried to learn F#.  I think functional languages have great concepts to teach us, and I like the functional extensions that have been made to Python.  However, F# suffers from the same “punctuation as syntax” disease that makes Perl so impossible to read.

 

What projects are you working on now?

I have several long-term things going on, although none of them are Python-related.  I’m doing video capture drivers for some custom USB cameras.  I’m doing video capture drivers for a big 7-foot LCD panel system for corporate conference rooms, where you plug in your laptop and share it with the room.  I’m doing capture drivers for a medical equipment manufacturer who makes ultrasound cardiac monitors.  The output of an ultrasound happens to look a lot like video capture, so I was a good fit.  I’m doing drivers for a depth-sensing camera system somewhat like the Kinect.

One of the advantages of being an independent consultant is that the days are never boring.  I get exposed to an incredible variety of things, from video capture to audio response curves to laser optics to real-time telemetry to high-speed USB devices to performance analysis to network optimization, etc.  When I get up in the morning, I look forward to coming in to work.


Which Python libraries are your favorite (core or 3rd party)?


Gosh, there are so many.  I think the library I use most often is PIL, the Python Imaging Library.  I use it as a scriptable version of PhotoShop.  As a dinosaur, I live in a command line.  I’m always popping in to the command line interpreter to look at the size of an image, or change a format, or make a thumbnail.  As a musician, I scan my music to put it on a Samsung Galaxy Tab 12.2″ tablet for performance, and I often need to tweak the balance and contrast or do a threshold operation the same way on many pages at a time.  PIL does that for me.
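
As an illustration of that kind of scripted tweaking (the file names here are made up, not taken from the interview), a contrast-and-thumbnail pass with PIL can be as short as this:

from PIL import Image, ImageEnhance

# hypothetical scanned page; any image file would do
im = Image.open("scanned_page.png")

# boost the contrast a little, then shrink to a tablet-friendly size
im = ImageEnhance.Contrast(im).enhance(1.5)
im.thumbnail((1200, 1600))
im.save("scanned_page_small.png")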

However, the most fun I’ve had with Python has usually involved NumPy and its friends, including SciPy and matplotlib.  I do a lot of projects with telemetry and heavy data analysis.  The scientists and designers often give me analysis algorithms in Matlab.  I’ve become quite adept at translating Matlab to NumPy, and I love the power and flexibility.
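
For a flavour of what such a translation looks like (a generic example, not one of the actual algorithms mentioned in the interview), a Matlab expression mixing a matrix product with element-wise operations maps over quite directly:

import numpy as np

# Matlab:  y = A * x;  z = y .* w;  m = mean(z(z > 0));
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, 0.5])
w = np.array([2.0, 0.25])

y = A.dot(x)         # matrix-vector product ('*' in Matlab)
z = y * w            # element-wise product ('.*' in Matlab)
m = z[z > 0].mean()  # mean of the positive elements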

ReportLab is another library that gets a lot of use from me.  I take great pride in making printed material look good, and ReportLab lets me make fine-tuned PDF files that will look good no matter where they are displayed and printed.

Is there anything else you’d like to say?


As someone with 40 years of programming experience under his belt, I feel like I ought to have profound, sage advice for all the young whippersnappers who are just getting started, but I think it’s all been said.  Python has a great community, and that community is an incredibly valuable resource for beginners.  Don’t be afraid to use it.

Thank you!

Previous PyDevs of the Week
Categories: FLOSS Project Planets

Jean-Baptiste Onofré: Apache Karaf Christmas gifts: docker.io, profiles, and decanter

Planet Apache - Mon, 2014-12-15 08:12

We are heading to Christmas time, and the Karaf team wanted to prepare some gifts for you.

Of course, we are working hard in the preparation of the new Karaf releases. A bunch of bug fixes and improvements will be available in the coming releases: Karaf 2.4.1, Karaf 3.0.3, and Karaf 4.0.0.M2.

Some sub-project releases are also in preparation, especially Cellar. We completely refactored Cellar internals, to provide a more reliable, predictable, and stable behavior. New sync policies are available, new properties, new commands, and also interesting new features like HTTP session replication, or HTTP load balancing. I will prepare a blog about this very soon.

But, we’re also preparing brand-new features.

Docker.io

I heard some people saying: "why do I need Karaf when I have docker.io?".

Honestly, I don’t understand this as the purpose is not the same: actually, Karaf on docker.io is a great value.

First, docker.io concepts are not new. It's more or less new on Linux, but the same kind of features has existed for a long time on other systems:

  • zones on Solaris
  • jail on FreeBSD
  • xen on Linux, in the past

So, nothing revolutionary in docker.io; however, it's a very convenient way to host multiple images/pseudo-systems on the same machine.

However, docker.io (like the other systems) is focused on the OS: it doesn't cover the application container by itself. For that, you have to prepare an image with the OS plus the application container. For instance, if you want to deploy your war file, you have to bootstrap a docker.io image with an OS and Tomcat (or Karaf ;)).

Moreover, remember the cool features provided by Karaf: ConfigAdmin and dynamic configuration, hotdeployment, features, etc.

You want to deploy your Camel routes, your ActiveMQ broker, your CXF webservices, your application: just use the docker.io image providing a Karaf instance!

And it’s what the Karaf docker.io feature provides. Actually, it provides two things:

  • a set of Karaf docker.io images ready to use: ubuntu/centos images with ready-to-use Karaf instances (using different combinations)
  • a set of shell commands and Karaf commands to easily bootstrap the images from a Karaf instance. It’s actually a good alternative to the Karaf child instances (which are only local to the machine).

Basically, docker.io doesn’t replace Karaf. However, Karaf on docker.io provides a very flexible infrastructure, allowing you to easily bootstrap Karaf instances. Associated with Cellar, you can bootstrap a Karaf cluster very easily as well.

I will prepare the donation and I will blog about the docker.io feature very soon. Stay tuned !!!

Karaf Profiles

A new feature comes in Karaf 4: the Karaf profiles. The purpose is to apply a ready to use set of configurations and provisioning to a Karaf instance.

Thanks to that you can prepare a complete profile containing your configuration and your application (features) and apply multiple profiles to easily create a ready-to-go Karaf instance.

It’s a great complement to the Karaf docker.io feature: the docker.io feature bootstraps the Karaf image, on which you can apply your profiles, all in a row.

Some profiles description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201412.mbox/%3CCAA66TpodJWHVpOqDz2j1QfkPchhBepK_Mwdx0orz7dEVaw8tPQ%40mail.gmail.com%3E.

I’m working on the storage of profiles on Karaf Cave, the application of profiles on running/existing Karaf instances, support of cluster profiles in Cellar, etc.

Again, I will create a specific blog post about profiles soon. Stay tuned again !!

Karaf Decanter

As a fully enterprise-ready container, Karaf has to provide monitoring and management features. We already provide a bunch of metrics via JMX (we have multiple MBeans for Karaf, Camel, ActiveMQ, CXF, etc).

However, we should provide:

  • storage of metrics and messages to be able to have an activity timeline
  • SLA definition of the metrics and messages, raising alerts when some metrics are not in the expected value range or when the messages contain a pattern
  • dashboard to configure the SLA, display messages, and graph the metrics

As always in Karaf, it should be very simple to install such kind of feature, with an integration of the supported third parties.

That’s why we started to work on Karaf Decanter, a complete and flexible monitoring solution for Karaf and the applications hosted by Karaf (Camel, ActiveMQ, CXF, etc).

The Decanter proposal and description is available here: http://mail-archives.apache.org/mod_mbox/karaf-dev/201410.mbox/%3C543D3D62.6050608%40nanthrax.net%3E.

The current codebase is also available: https://github.com/jbonofre/karaf-decanter.

I’m preparing the donation (some cleansing/polishing in progress).

Again, I will blog about Karaf Decanter asap. Stay tuned again again !!

Conclusion

You can see that, as always, the Karaf team is committed and dedicated to providing you with very convenient and flexible features. Lots of those features come from your ideas, discussions, and proposals. So, keep on discussing with us; we love our users!

We hope you will enjoy those new features. We will document and blog about these Christmas gifts soon.

Enjoy Karaf, and Happy Christmas !

Categories: FLOSS Project Planets

Europython: EuroPython 2015: Submitted Proposal

Planet Python - Mon, 2014-12-15 05:33

The EuroPython Society (EPS) is happy to announce that we have received the amended proposal from the ACPySS team in Spain to hold EuroPython 2015 in Bilbao, Spain. We had been discussing questions with them in the last couple of days.

On-site Team Proposal

Following the CFP process, we are now publishing the redacted version of the proposal, which has all the confidential information removed:

The EPS board will now do a final review and announce the decision in the next two weeks (2014-12-26 latest).

Please send your feedback

We would like to encourage feedback from the EuroPython and Python community regarding the proposal. Please send your emails to the europython-improve mailing list (you will have to sign up to that list before being able to post there).

Thank you,

EuroPython Society

Categories: FLOSS Project Planets

Thomas Goirand: Supporting 3 init systems in OpenStack packages

Planet Debian - Mon, 2014-12-15 03:15

tl;dr: Providing support for all 3 init systems (sysv-rc, Upstart and systemd) isn't hard, and generating the init scripts / Upstart jobs / systemd units using a template system is a lot easier than I previously thought.

As always, when writing this kind of blog post, I do expect that others will not like what I did. But that's the point: give me your opinion in a constructive way (please be polite even if you don't like what you see… I have had to read harsh comments too many times), and I'll implement your ideas if I find them nice.

History of the implementation: how we came to the idea

I had no plan to do this. I don’t believe what I wrote can be generalized to all of the Debian archive. It’s just that I started doing things, and it made sense when I did it. Let me explain how it happened.

Since it's clear that many, and especially the most advanced users, may have an opinion about which init system they prefer, and because I also support Ubuntu (at least Trusty), I thought it was a good idea to support all the "main" init systems: sysv-rc, Upstart and systemd. I have counted (for the sake of being exact in this blog): OpenStack in Debian currently contains 64 init scripts to run daemons in total. That's quite a lot. Way too many to just write them all by hand. Though that's what I was doing for the last few years… until the end of this last summer!

So, doing all by hand, I first started implementing Upstart. Its support was there only when building in Ubuntu (which isn't the correct thing to do; this is now fixed, read further…). Then we thought about adding support for systemd. Gustavo Panizzo, one of the contributors to the OpenStack packages, started implementing it in Keystone (the auth server for OpenStack) for the Juno release, which came out this October. He did that last summer, early enough that we didn't expect anyone to be using the Juno branch of Keystone yet. After some experiments, we had it kind of working. What he did was invoke "/etc/init.d/keystone start-systemd", which was still using start-stop-daemon. Yes, that's not perfect, and it's better to use systemd foreground process handling, but at least we had a single place to write the startup scripts, where we check /etc/default for the logging configuration, configure the log file, and so on.

Then, around October, I took a step backward to look at the whole picture with the sysv-rc scripts, and saw the mess, with all the tiny, small differences between them. It became clear that I had to do something to make sure they were all the same, with support for the same things (like which log system to use, where to store the PID, creating /var/lib/project, /var/run/project and so on…).

Last, in this month of December, I was able to fix the remaining issues for systemd support, thanks to the awesome contribution of Mikael Cluseau on the Alioth OpenStack packaging list. Now, the systemd unit file is still invoking the init script, but it's not using start-stop-daemon anymore, no PID file is involved, and daemons run as systemd foreground processes. Finally, daemon service files are also activated on installation (they were not previously).

Implementation

So I took the simplistic approach of always using the same template for the sysv-rc switch/case and the start and stop functions, appending it at the end of all debian/*.init.in scripts. I started to try to reduce the number of variables, and I was surprised by the result: only a very small part of each init script needs to change from daemon to daemon. For example, for nova-api, here's the init script (LSB header stripped out):

DESC="OpenStack Compute API"
PROJECT_NAME=nova
NAME=${PROJECT_NAME}-api

That is it: only 3 lines, defining only the name of the daemon, the name of the project it attaches to (e.g. nova, cinder, etc.), and a long description. There are of course much more complicated init scripts (see the one for neutron-server in the Debian archive for example), but the vast majority only need the above.

Here’s the sysv-rc init script template that I currently use:

#!/bin/sh
# The content after this line comes from openstack-pkg-tools
# and has been automatically added to a .init.in script, which
# contains only the descriptive part for the daemon. Everything
# else is standardized as a single unique script.

# Author: Thomas Goirand <zigo@debian.org>

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin

if [ -z "${DAEMON}" ] ; then
    DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
    STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
    CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
    DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"
fi

# Exit if the package is not installed
[ -x $DAEMON ] || exit 0

# If ran as root, create /var/lock/X, /var/run/X, /var/lib/X and /var/log/X as needed
if [ "x$USER" = "xroot" ] ; then
    for i in lock run log lib ; do
        mkdir -p /var/$i/${PROJECT_NAME}
        chown ${SYSTEM_USER} /var/$i/${PROJECT_NAME}
    done
fi

# This defines init_is_upstart which we use later on (+ more...)
. /lib/lsb/init-functions

# Manage log options: logfile and/or syslog, depending on user's choosing
[ -r /etc/default/openstack ] && . /etc/default/openstack
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
[ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

do_start() {
    start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} \
        --make-pidfile --pidfile ${PIDFILE} \
        --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
        --test > /dev/null || return 1
    start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} \
        --make-pidfile --pidfile ${PIDFILE} \
        --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
        -- $DAEMON_ARGS || return 2
}

do_stop() {
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
    RETVAL=$?
    rm -f $PIDFILE
    return "$RETVAL"
}

do_systemd_start() {
    exec $DAEMON $DAEMON_ARGS
}

case "$1" in
start)
    init_is_upstart > /dev/null 2>&1 && exit 1
    log_daemon_msg "Starting $DESC" "$NAME"
    do_start
    case $? in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
;;
stop)
    init_is_upstart > /dev/null 2>&1 && exit 0
    log_daemon_msg "Stopping $DESC" "$NAME"
    do_stop
    case $? in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
;;
status)
    status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
systemd-start)
    do_systemd_start
;;
restart|force-reload)
    init_is_upstart > /dev/null 2>&1 && exit 1
    log_daemon_msg "Restarting $DESC" "$NAME"
    do_stop
    case $? in
        0|1)
            do_start
            case $? in
                0) log_end_msg 0 ;;
                1) log_end_msg 1 ;; # Old process is still running
                *) log_end_msg 1 ;; # Failed to start
            esac
        ;;
        *) log_end_msg 1 ;; # Failed to stop
    esac
;;
*)
    echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload|systemd-start}" >&2
    exit 3
;;
esac

exit 0

Nothing particularly fancy here… You'll notice that it's really OpenStack-centric (see the LOGFILE and CONFIG_FILE things…). You may have also noticed the call to "init_is_upstart" which is needed for Upstart support. I'm not sure if it's at the correct place in the init script. Should I put that at the top of the script? Was I right with the exit values for it? Please send me your comments…

Then I thought about generalizing all of this, because not only the sysv-rc scripts needed to be squared up, but also the Upstart jobs. The approach here was to source the debian/*.init.in script and then generate the Upstart job accordingly, using the above 3 variables (or more as needed). Here, the fun is that, instead of calculating everything at runtime as with sysv-rc, for Upstart jobs many things are calculated at build time. For each debian/*.init.in script that debian/rules finds, pkgos-gen-upstart-job is called. Here's pkgos-gen-upstart-job:

#!/bin/sh

INIT_TEMPLATE=${1}
UPSTART_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.upstart/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

## Find out what should go in After=
#SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`
#
#if [ -n "${SHOULD_START}" ] ; then
#    AFTER="After="
#    for i in ${SHOULD_START} ; do
#        AFTER="${AFTER}${i}.service "
#    done
#fi

if [ -z "${DAEMON}" ] ; then
    DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
    STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
    CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"

echo "description \"${DESC}\"
author \"Thomas Goirand <zigo@debian.org>\"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    for i in lock run log lib ; do
        mkdir -p /var/\$i/${PROJECT_NAME}
        chown ${SYSTEM_USER} /var/\$i/${PROJECT_NAME}
    done
end script

script
    [ -x \"${DAEMON}\" ] || exit 0
    DAEMON_ARGS=\"${DAEMON_ARGS}\"
    [ -r /etc/default/openstack ] && . /etc/default/openstack
    [ -r /etc/default/\$UPSTART_JOB ] && . /etc/default/\$UPSTART_JOB
    [ \"x\$USE_SYSLOG\" = \"xyes\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --use-syslog\"
    [ \"x\$USE_LOGFILE\" != \"xno\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --log-file=${LOGFILE}\"
    exec start-stop-daemon --start --chdir /var/lib/${PROJECT_NAME} \\
        ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} \\
        --exec ${DAEMON} -- --config-file=${CONFIG_FILE} \${DAEMON_ARGS}
end script
" >${UPSTART_FILE}

The only thing I don't know how to do is how to implement the Should-Start / Should-Stop in an Upstart job. Can anyone shoot me an email and tell me the solution?

Then, I wanted to add support for systemd. Here, we cheated, since we just call the sysv-rc script from the systemd unit; however, the systemd-start target uses exec, so the process stays in the foreground. It's also much smaller than the Upstart thing. However, here I could implement the "After" part, corresponding to the Should-Start:

#!/bin/sh

INIT_TEMPLATE=${1}
SERVICE_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.service/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi

# Find out what should go in After=
SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`

if [ -n "${SHOULD_START}" ] ; then
    AFTER="After="
    for i in ${SHOULD_START} ; do
        AFTER="${AFTER}${i}.service "
    done
fi

echo "[Unit]
Description=${DESC}
$AFTER

[Service]
User=${SYSTEM_USER}
Group=${SYSTEM_GROUP}
WorkingDirectory=/var/lib/${PROJECT_NAME}
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStartPre=/bin/chown ${SYSTEM_USER}:${SYSTEM_GROUP} /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStart=${SCRIPTNAME} systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target
" >${SERVICE_FILE}

As you can see, it's calling ${SCRIPTNAME} systemd-start (that is, the generated sysv-rc init script), which isn't great. I'd be happy to have comments from systemd users / maintainers on how to fix it to make it better.

Integrating in debian/rules

To integrate with the Debian package build system, we only had to write this:

override_dh_installinit:
	# Create the init scripts from the template
	for i in `ls -1 debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in//` ; \
		cp $$i $$MYINIT.init ; \
		cat /usr/share/openstack-pkg-tools/init-script-template >>$$MYINIT.init ; \
		pkgos-gen-systemd-unit $$i ; \
	done
	# If there's an upstart.in file, use that one instead of the generated one
	for i in `ls -1 debian/*.upstart.in` ; do \
		MYPKG=`echo $$i | sed s/.upstart.in//` ; \
		cp $$MYPKG.upstart.in $$MYPKG.upstart ; \
	done
	# Generate the upstart job if there's no already existing .upstart.in
	for i in `ls debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in/.upstart.in/` ; \
		if ! [ -e $$MYINIT ] ; then \
			pkgos-gen-upstart-job $$i ; \
		fi \
	done
	dh_installinit --error-handler=true
	# Generate the systemd unit file
	# Note: because dh_systemd_enable is called by the
	# dh sequencer *before* dh_installinit, we have
	# to process it manually.
	for i in `ls debian/*.init.in` ; do \
		pkgos-gen-systemd-unit $$i ; \
		MYSERVICE=`echo $$i | sed 's/debian\///'` ; \
		MYSERVICE=`echo $$MYSERVICE | sed 's/.init.in/.service/'` ; \
		dh_systemd_enable $$MYSERVICE ; \
	done

As you can see, it's possible to use a debian/*.upstart.in and not use the templating system in the more complicated cases (I needed it mostly for neutron-server and neutron-plugin-openvswitch-agent).

Conclusion

I do not pretend that what I wrote in openstack-pkg-tools is the ultimate solution. But I'm convinced that it answers our own needs as the OpenStack maintainers in Debian. There's a lot of room for improvement (like implementing the Should-Start in Upstart jobs, or no longer calling the sysv-rc script from the systemd units), but moving to templates and generated scripts was a very good move, as the init scripts are way easier to maintain now, in a much more unified way. Even though I'm not completely satisfied with the systemd and Upstart implementations, I'm sure there's already a huge improvement in the maintainability of the sysv-rc scripts.

Last and again: please send your comments and help improve the above! :)

Categories: FLOSS Project Planets

Liran Tal's Enginx: Drupal Performance Tip – “I’m too young to die” – know your DB engines

Planet Drupal - Mon, 2014-12-15 02:16
This entry is part 4 of 4 in the series Drupal Performance Tips

In the spirit of the computer video game Doom and its skill levels, we’ll review a few ways you can improve your Drupal speed performance and optimize for better results and server response time. The tips we’ll cover may at times be specific to Drupal 6 versions, although you can always learn the best practices from these examples and apply them to your own code base.

Doom skill levels: (easiest first)

1. I’m too young to die

2. Hey, not too rough

3. Hurt me plenty

4. Ultra-violence

5. Nightmare!

This post is rated “I’m too young to die” difficulty level.

 

Drupal 6 shipped with all tables being MyISAM, and then Drupal 7 changed all that and shipped with all of its tables using the InnoDB database engine. Each engine has its own strengths and weaknesses, but it’s quite clear that InnoDB will probably perform better for your Drupal site (though it has quite a bit of fine-tuning configuration to be tweaked in my.cnf).

Some modules, whether on Drupal 6 or on Drupal 7 (if they simply upgraded without reviewing all of their code), might ship with queries like SELECT COUNT(); if you have migrated your tables to InnoDB (or are simply using Drupal 7), these will hinder database performance. That’s mainly because InnoDB and MyISAM work differently: whereas an unfiltered COUNT() is a very fast query on a MyISAM table, which keeps the exact row count in its table metadata, InnoDB has to do a full table scan to count the rows. Obviously, on an InnoDB configuration, running such queries on large tables will result in very poor performance.

Note to ponder upon – what about the Views module, which uses similar COUNT() queries to create the pagination for its views?

The post Drupal Performance Tip – “I’m too young to die” – know your DB engines appeared first on Liran Tal's Enginx.

Categories: FLOSS Project Planets