FLOSS Project Planets

Pantheon Blog: An Example Repository to Build Drupal with Composer on Travis

Planet Drupal - Tue, 2015-06-30 12:59
A robust Continuous Integration system with good test coverage is the best way to ensure that your project remains maintainable; it is also a great opportunity to enhance your development workflow with Composer. Composer is a dependency management system that collects and organizes all of the software that your project needs in order to run. 
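For readers new to Composer: a project's composer.json simply declares the packages the project needs. A minimal, illustrative sketch (the package choices and version constraints here are placeholders, not Pantheon's actual example repository):

{
    "require": {
        "drush/drush": "~7.0",
        "behat/behat": "^3.0"
    }
}

Running composer install resolves these constraints and downloads everything into the vendor/ directory, so the dependencies never need to be committed to the project repository.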
Categories: FLOSS Project Planets

Bryan Pendleton: Alternate sources for coverage of Oracle v Google?

Planet Apache - Tue, 2015-06-30 12:28

A reader (I have readers? Really? I thought it was just my mom and a couple old friends) asked me why I was pointing to Florian Mueller as a source for information on Oracle v Google.

I responded that I agreed that Mueller was pretty strongly biased, but, on the other hand, who else is covering this case?

I used to read Groklaw, but as far as I know they shut down two years ago.

Are there other sources for understanding the current state of Oracle v Google?

Please don't tell me to read the court proceedings themselves; yes, I know they are published and available, but no, that's just not helpful, sorry.

Categories: FLOSS Project Planets

Acquia: Caching to Improve Drupal Performance: The Three Levels You Should Know

Planet Drupal - Tue, 2015-06-30 12:07

In our continuing mission (well, not a mission; it’s actually a blog series) to help you improve your Drupal website, let’s look at the power of caching.

In our previous post, we debunked some all-too-common Drupal performance advice. This time we're going positive, with a simple, rock-solid strategy to get you started: caching is the single best way to improve Drupal performance without having to fiddle with code.

At a basic level, caching is easy enough for a non-technical user to implement. Advanced caching techniques might require some coding experience, but for most users, basic caching alone will bring drastic performance improvements.

Caching in Drupal happens at three separate levels: application, component, and page. Let’s review each level in detail.

Application-level caching

This is the caching capability baked right into Drupal. You won't see it in action unless you dig deep into Drupal's internal code. It is enabled by default and will never show stale, outdated content.

With application-level caching, Drupal stores much of its internal data and structures in efficient ways to speed up frequent access. This isn't information that a site visitor will see per se, but it is critical for constructing any page.

You can't really configure this cache, except for telling Drupal explicitly where to store it. Basically, the only enhancement that can be made at this level is improving where this cached information is stored, such as using Memcached instead of the database -- and even that is rarely a big enough win to warrant the effort.
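As a quick illustration of that one knob, pointing Drupal 7's cache bins at Memcached via settings.php might look like the following sketch. It assumes the contrib memcache module is installed and a memcached server is running; the module path and key prefix are placeholders.

// settings.php -- route Drupal 7's cache bins to Memcached
// (requires the contrib memcache module and a running memcached server).
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database: it must not be volatile.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
// Namespace the keys if several sites share one memcached instance.
$conf['memcache_key_prefix'] = 'examplesite';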

You just need to install Drupal and let the software take care of caching at the application level.

Component-level caching

This works on user-facing components such as blocks, panels, and views. For example, you might have a website whose content changes constantly while a single block remains the same. In fact, you may have that same block spread across dozens of pages. Caching it can result in big performance improvements.

Component-level caching is usually disabled by default, though you can turn it on with some simple configuration changes. For the best results, identify blocks, panels, and views that remain the same across your site, and then cache them aggressively. You will see strong speedups for authenticated users.
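To make that concrete, a minimal sketch: in Drupal 7, core block caching maps to a single variable (the same setting as the "Cache blocks" checkbox on the Performance page), which can also be forced from settings.php. Views and Panels expose their own per-display cache settings in the UI.

// settings.php -- force core block caching on (Drupal 7).
$conf['block_cache'] = TRUE;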

Page-level caching

This is exactly what it sounds like: the entire page is cached, stored, and delivered to a user. This is the most efficient type of caching. Instead of generating pages dynamically through a full Drupal bootstrap, your server can serve static HTML pages to users. Site performance will improve dramatically.

Page-level caching gives you a lot of room to customize. You can use any number of caching servers, including Varnish, which we use at Acquia Cloud. You can also use CDNs like Akamai, Fastly, or CloudFlare to deliver cached pages from servers close to the user's location. With CDNs, you are literally bringing your site closer to your users.

Keep in mind that full page-level caching works only for anonymous users by default. Fortunately, anonymous visitors form the bulk of traffic to most websites.
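As a hedged example, the relevant Drupal 7 settings.php knobs look roughly like this: cache turns on the anonymous page cache, and page_cache_maximum_age is the max-age honored by external caches such as Varnish or a CDN (the values are illustrative):

// settings.php -- anonymous page caching (Drupal 7).
$conf['cache'] = 1;                      // serve cached pages to anonymous users
$conf['page_cache_maximum_age'] = 900;   // let Varnish/CDN keep pages for 15 minutes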

It bears repeating: caching should be your top priority for boosting Drupal performance. By identifying and caching commonly repeated components and using a CDN at the page level, you'll see site speed improvements that you can write home about.

Next time: How to Evaluate Drupal Modules for Performance Optimization.

Tags:  acquia drupal planet
Categories: FLOSS Project Planets

Acquia: Seamless Migration to Drupal 8: Make it Yours

Planet Drupal - Tue, 2015-06-30 11:26

Hi there. I’m Adam from Acquia. And I want YOU to adopt Drupal 8!

I’ve been working on this for months. Last year, as an Acquia intern, I wrote the Drupal Module Upgrader to help people upgrade their code from Drupal 7 (D7) to Drupal 8 (D8). And now, again as an Acquia intern, I’m working to provide Drupal core with a robust migration path for your content and configuration from D6 and D7 to Drupal 8. I’m a full-service intern!

The good news is that Drupal core already includes the migration path from D6 to D8. The bad news is that the (arguably more important) migration path from D7 to D8 is quite incomplete, and Drupal 8 inches closer with each passing day. That’s why I want -- nay, need -- your help.

We need to get this upgrade path done.

If you want core commits with your name on them (and why wouldn't you?), this is a great way to get some, regardless of your experience level. Noob, greybeard, or somewhere in between, there is a way for you to help. (Besides, the greybeards are busy fixing critical issues.)

What’s this about?

Have you ever tried to make major changes to a Drupal site using update.php and a few update_N hooks? If you haven’t, consider yourself lucky; it’s a rapid descent into hell. Update hooks are hard to test, and any number of things can go wrong while running them. They’re not adaptable or flexible. There’s no configurability -- you just run update.php and hope for the best. And if you’ve got an enormous site with hundreds of thousands of nodes or users, you’ll be staring anxiously at that progress bar all night. So if the idea of upgrading an entire Drupal site in a single function terrifies you, congratulations: you’re sane.

No, when it comes to upgrading a full Drupal site, hook_update_N() is the wrong tool for the job. It’s only meant for making relatively minor modifications to the database. Greater complexity demands something a lot more powerful.
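(For scale, this is the kind of "relatively minor modification" an update hook is right for -- a sketch of a Drupal 7 update hook adding an index, with the module and table names invented for the example:)

/**
 * Add an index on the 'created' column of the {mymodule_data} table.
 */
function mymodule_update_7101() {
  // A small, one-shot schema tweak: fine for an update hook,
  // nothing like migrating a whole site's content.
  db_add_index('mymodule_data', 'created_idx', array('created'));
}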

The Migrate API is that something. This well-known contrib module has everything you need to perform complex migrations. It can migrate content from virtually anything (WordPress, XML, CSV, or even a Drupal site) into Drupal. It’s flexible. It’s extensible. And it’s in Drupal 8 core. Okay, not quite -- the API layer has been ported into core, but the UI and extras provided by the Drupal 7 Migrate module are in a (currently sandboxed) contrib module called Migrate Plus.

Also in core is a new module called Migrate Drupal, which uses the Migrate API to provide upgrade paths from Drupal 6 and 7. This is the module that new Drupal 8 users will use to move their old content and configuration into Drupal 8.

At the time of this writing, Migrate Drupal contains a migration path for Drupal 6 to Drupal 8, and it’s robust and solid thanks to the hard work of many contributors. It was built before the Drupal 7 migration path because Drupal 6 security support will be dropped not long after Drupal 8 is released. It covers just about all bases -- it migrates your content into Drupal 8, along with your CCK fields (and their values). It also migrates your site’s configuration into Drupal 8, right down to configuration variables, field widget and formatter settings, and many other useful tidbits that together comprise a complete Drupal 6 site.

Here’s a (rather old) demo video by @benjy, one of the main developers of the Drupal 6 migration path:

Awesome, yes? I think so. Which brings me to what Migrate Drupal doesn’t yet have -- a complete upgrade path from Drupal 7 to Drupal 8. We’re absolutely going to need one. It’s critical if we’re going to get people onto Drupal 8!

This is where you come in. The Drupal 7 migration path is one of the best places to contribute to Drupal core, even at this late stage of the game. The D7 upgrade path has been mapped out in a meta-issue on drupal.org, and a large chunk of it is appropriate for novice contributors!

Working on the Migrate API involves writing migrations, which are YAML files (if you’re not familiar with YAML, the smart money says that you will pick it up in, honestly, thirty seconds flat). You’ll also write automated tests, and maybe a plugin or two -- a crucial skill when it comes to programming Drupal 8! If you’re a developer, contributing migrations is a gentle, very useful way to prepare for D8.

A very, very quick overview of how this works

Migrations are a lot simpler than they look. A migration is a piece of configuration, like a View or a site slogan. It lives in a YAML file.

Migrations have three parts: the source plugin, the processing pipeline, and the destination plugin. The source plugin is responsible for reading rows from some source, like a Drupal 7 database or a CSV file. The processing pipeline defines how each field in each row will be massaged, tweaked, and transformed into a value that is appropriate for the destination. Then the destination plugin takes the processed row and saves it somewhere -- for example, as a node or a user.

There’s more to it, of course, but that’s the gist. All migrations follow this source-process-destination flow.

id: d6_url_alias
label: Drupal 6 URL aliases
migration_tags:
  - Drupal 6
# The source plugin is an object which will read the Drupal 6
# database directly and return an iterator over the rows of the
# {url_alias} table.
source:
  plugin: d6_url_alias
# Define how each field in the source row is mapped into the destination.
# Each field can go through a "pipeline", which is just a chain of plugins
# that transform the original value into the destination value, one step at
# a time. Source values can go through any number of transformations
# before being added to the destination row. In this case, there are no
# transformations -- it's just direct mapping.
process:
  source: src
  alias: dst
  langcode: language
# The destination row will be saved by the url_alias destination plugin, which
# knows how to create URL aliases. There are many other destination plugins,
# including ones to create content entities (nodes, users, terms, etc.) and
# configuration (fields, display settings, etc.)
destination:
  plugin: url_alias
# Migrations can depend on specific modules, configuration entities, or even
# other migrations.
dependencies:
  module:
    - migrate_drupal

I <3 this, how can I help?

The first thing to look at is the Drupal 7 meta-issue. It divvies up the Drupal 7 upgrade path by module, and divides the migrations further by priority. The low-priority ones are reasonably easy, so if you're new, you should grab one of those and start hacking on it. (Hint: migrating variables to configuration is the easiest kind of migration to write -- see the sketch below -- and there are plenty of examples.) The core Migrate API is well-documented too.
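A hedged sketch of what such a variable-to-configuration migration can look like, modeled on the existing Drupal 6 migrations in core -- the source and destination plugin names follow the core pattern, but the exact IDs and keys of the final D7 YAML may differ:

id: d7_site_name
label: Drupal 7 site name
migration_tags:
  - Drupal 7
source:
  plugin: variable        # reads settings out of the {variable} table
  variables:
    - site_name
process:
  name: site_name         # direct mapping, no transformations
destination:
  plugin: config
  config_name: system.site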

If you need help, we’ve got a dedicated IRC channel (#drupal-migrate). I’m phenaproxima, and I’m one of several nice people who will be happy to help you with any questions you’ve got.

If you’re not a developer, you can still contribute. Do you have a Drupal 6 site? Migrate it to Drupal 8, and see what happens! Then tell us how it went, and include any unexpected weirdness so we can bust bugs. As the Drupal 7 upgrade path shapes up, you can do the same thing on your Drupal 7 site.

If you want to learn about meatier, more complicated issues, the core Migrate team meets every week in a Google Hangout-on-air, to talk about larger problems and overarching goals. But if you’d rather focus on simpler things, don’t worry about it. :)

And with that, my fellow Drupalist(a)s, I invite you to step up to the plate. Drupal 8 is an amazing release, and everyone deserves it. Let’s make its adoption widespread. Upgrading has always been one of the major barriers to adopting a new version of Drupal, but the door is open for us to fix that for good. I know you can help.

Besides, core commits look really good with your name tattooed on ‘em. Join us!

Tags:  acquia drupal planet
Categories: FLOSS Project Planets

Global shortcut handling in a Plasma Wayland session

Planet KDE - Tue, 2015-06-30 11:02

KDE Frameworks contains a framework called KGlobalAccel. This framework allows applications to register key bindings (e.g. Alt+Tab) for actions; when the key binding is triggered, the action gets invoked. Internally this framework uses a DBus interface to communicate with a daemon (kglobalaccel5) to register the key bindings and to get notified when an action is triggered.

On X11 the daemon uses X11 core functionality to get notified whenever key events it is interested in happen. Basically, it is a global key logger. Such an architecture has the disadvantage that any process could implement the same infrastructure, making it possible for multiple processes to grab the same global shortcut. In such a case undefined behavior is triggered: either multiple actions are triggered at the same time, or only one action is triggered while the others are not informed at all.

In addition, the X11 protocol and the X server do not know that kglobalaccel5 is a shortcut daemon. They don't know that, for example, the shortcut to lock the screen must be forwarded even if there is an open context menu which has grabbed the keyboard.

In Wayland, the security around input handling has been fixed. A global key logger is no longer possible. So our kglobalaccel5 just doesn't get any input events (sad, sad kglobalaccel5 cannot do anything), and even when started on Xwayland with the xcb plugin it's pretty much broken: it can only intercept key events that are sent to another Xwayland client.

This means global shortcut handling needs support from the compositor. Now it doesn't make much sense to keep the architecture with a separate daemon process, as that would reintroduce a security vulnerability: it would mean there is a way to log keys -- one would only need to become the global shortcuts daemon. We also don't want to introduce a round trip to another application to decide where to deliver the key event.

Therefore the only logical place is to integrate global shortcut handling directly into KWin. Now this is a little bit tricky. First of all, kglobalaccel5 gets DBus-activated when the first application tries to access the DBus interface -- and of course KWin itself uses the DBus interface. So KWin starts up and has launched the useless kglobalaccel5. This means one of our tasks is to prevent kglobalaccel5 from starting.

Of course we do not want to duplicate all the work that went into kglobalaccel; we want to reuse as much of it as possible. Because of that, kglobalaccel5 got a little surgery: the platform-specific parts were split out into loadable runtime plugins, chosen depending on QGuiApplication::platformName(). This allows KWin to provide a plugin that performs the "platform-specific" parts. But such a plugin would still be loaded as part of kglobalaccel5, not as part of KWin. So another change was to turn the functionality of kglobalaccel into a library, with the binary just a small wrapper around it. This allows KWin to link the library, start KGlobalAccel from within the KWin process, and feed in its own plugin.

Starting the linked KGlobalAccel is one of the first things KWin needs to do during startup. It's essential that KWin take over the DBus interface before any process tries to access it (conveniently, this happens so early that the Wayland sockets do not accept connections yet and Xwayland is not even started). We will also try to make kglobalaccel5 itself a little more robust, so that it does not launch at all in a Plasma/Wayland session.

Now the reader might think: wait, that still gives me the possibility to install a stealth key logger, I just need to create shortcuts for all keys. Nope, doesn’t work. As key events get filtered out a user would pretty quickly notice that something is broken.

Integrating KGlobalAccel into KWin on Wayland brings an obvious disadvantage: it's tied to KWin. If one wants to use applications using KGlobalAccel on other compositors, some additional work might be needed to hook into their local global shortcut system -- if there is one. For most applications this is no problem, though, as they are part of the Plasma workspace. Also, for other global shortcut systems to work with KWin, they need to be ported to use KGlobalAccel internally when running in a Plasma/Wayland session (that's also a good idea for X11 sessions, as KGlobalAccel can provide additional features like checking whether a key is already taken by another process).

Categories: FLOSS Project Planets

FSF Blogs: May 2015 - Brest, Athens, Heraklion, and Chania

GNU Planet! - Tue, 2015-06-30 11:00

RMS gave his speech "Logiciels Libres & Éducation" at the Université de Bretagne Occidentale's Guilcher amphitheatre in Brest, France, on May 12th, 2015, twenty years after his first visit to the city, to an audience of over five hundred people.

(Photos under CC BY-SA 3.0 and courtesy of Romain Heller.)

He was also in Greece later in May, to speak:

…at CommonsFest in Athens on May 16th,

(Photos under CC BY-SA 3.0 and courtesy of dkoukoul.)

…at the Lecture Hall of Natural History Museum of Crete, in Heraklion, on May 22nd,

(Photos under CC BY-SA 3.0 and courtesy of dkoukoul.)

…and at the Technical University of Crete (speech available in Ogg Vorbis and WebM formats), in Chania, on May 27th.

(Photos under CC BY-SA 3.0 and courtesy of the Technical University of Crete.)

Please fill out our contact form, so that we can inform you about future events in and around Brest, Athens, Heraklion, and Chania. Please see www.fsf.org/events for a full list of all of RMS's confirmed engagements, and contact rms-assist@gnu.org if you'd like him to come speak.

Thank you to everyone who helped make this tour a success!

Categories: FLOSS Project Planets

GSoC Midterm Update

Planet KDE - Tue, 2015-06-30 10:36

So Google Summer of Code midterms are here. I want to thank my mentor, Jasem, for helping me out. I am now able to successfully display one constellation image: the Andromeda constellation. Currently the image is displayed, but a lot of work remains on positioning and rotating the image on the sky map. Here is a brief summary of the changes I made.

I implemented an abstract function in SkyPainter, virtual bool drawConstellationArtImage(ConstellationsArt *obj, bool drawConstellationImage), then implemented it as an empty function in SkyGLPainter; the real implementation lives in SkyQPainter.

In skymapcomposite.cpp, the class ConstellationArtComponent is added via the addComponent() method, and its draw function is called as m_ConstellationArt->draw(skyp). This draw function in turn calls the drawConstellationArtImage() function described above. Lastly, I edited data/CMakeLists.txt to include skycultures.sqlite and all the constellation images. Presently skycultures.sqlite includes only one record, the one for the Andromeda constellation. Here is a screenshot of the same.

Here's the plan for the next few days: get the button that toggles constellation art on/off working, and position/rotate/scale the image appropriately for Andromeda. Once that is done, make all the constellations appear in the sky. There will be 85 of them instead of 88, because I have a single image file for Argo Navis, which was later divided into Carina, Puppis and Vela. More to come soon!
Categories: FLOSS Project Planets

Omaha Python Users Group: July 15 Meeting Details

Planet Python - Tue, 2015-06-30 10:19

Topic/Speaker – “Integrating Python Into Other Code Types” / Adam Shaver
The talk will cover direct C/C++ integration, use of Boost for integration, and use of an Enterprise Service Bus (ESB) -- Zato.io -- to integrate Python into the workflow.

Location – Alley Poyner Macchietto Architecture Office in the Tip Top Building at 1516 Cuming Street.

Meeting starts at 6:30 pm, Wednesday, 7/15/2015

Categories: FLOSS Project Planets

Drupal Watchdog: Testing 1.2.3...

Planet Drupal - Tue, 2015-06-30 10:15
Column

The introduction of Behat 3, and the subsequent release of the Behat Drupal Extension 3, opened up several new features with regard to testing Drupal sites. The concept of test suites, combined with the fact that all contexts are now treated equally, means that a site can have different suites of tests that focus on specific areas of need.

Background

Behat is a PHP framework for implementing Behavior Driven Development (BDD). The aim is to use ubiquitous language to describe value for everybody involved, from the stakeholders to the developers. A quick example:

In order to encourage visitors to become more engaged in the forums
Visitors who choose to post a topic or comment
Will earn a 'Communicator' badge

This is a Behat feature; there need be no magic or structure to it. The goal is to simply and concisely describe a feature of the site that provides true value. In Behat, features are backed up with scenarios. Scenarios are written in Gherkin and are mapped directly to step definitions which execute against a site and determine whether, indeed, a given scenario is working.

Continuing with the above example:

Scenario: A user posts a comment to an existing topic and earns the communicator badge
  Given a user is viewing a forum topic "Getting started with Behat"
  When they post a comment
  Then they should immediately see the "Communicator" badge

Each of the Given, When, and Then steps is mapped to code using either regular expressions or, new in Behat 3, Turnip syntax:
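A hedged sketch of such a step definition, using a Turnip-style :title placeholder -- the context class name, the URL scheme, and the use of the visitPath() helper (inherited from Mink's RawMinkContext) are assumptions for illustration, not the article's actual code:

use Drupal\DrupalExtension\Context\RawDrupalContext;

class ForumContext extends RawDrupalContext {

  /**
   * @Given a user is viewing a forum topic :title
   */
  public function userIsViewingForumTopic($title) {
    // Turnip syntax: ":title" captures the quoted argument that a regex
    // step definition would otherwise have to match by hand.
    $this->visitPath('/forum-topic/' . rawurlencode($title));
  }

}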

Categories: FLOSS Project Planets

Interview with Livio Fania

Planet KDE - Tue, 2015-06-30 10:02
Could you tell us something about yourself?

I’m Livio Fania, an Italian illustrator living in France.

Do you paint professionally, as a hobby artist, or both?

I paint professionally.

What genre(s) do you work in?

I make illustrations for the press, posters and children’s books. My universe is made of geometric shapes, stylized characters and flashy colors.

Whose work inspires you most — who are your role models as an artist?

I like the work of João Fazenda, Riccardo Guasco and Nick Iluzada among many others.

What makes you choose digital over traditional painting?

I haven’t made a definitive choice. Even if I work mainly digitally, I still have a lot of fun using traditional tools such as colored pencils, brush pens and watercolors. Besides, in 90% of cases I draw by hand, scan the drawing, and only at the end of the process do I grab my graphics tablet stylus.

I do not think that working digitally means being faster. On the contrary, I can work more quickly by hand, especially in the first sketching phases. What digital art allows is CONTROL over the whole process. If you keep your layer stack well organized, you can always edit your art without losing the original version, which is very useful when your client asks for changes. If you work with traditional tools and you drop your ink in the wrong place, you can’t press Ctrl+Z.

How did you find out about Krita?

I discovered Krita through a conference video posted on David Revoy’s blog. Even if I don’t particularly like his universe, I think he is probably the most influential artist using FLOSS tools, and I’m very grateful to him for sharing his knowledge with the community. Previously I worked with MyPaint, mainly for its minimalist interface, which was perfect for the small laptop I had. Then I discovered that Krita was more versatile and better developed, so I took some time to learn it, and now I could not do without it.

What was your first impression?

At first I thought it was not the right tool for me. Actually, most digital artists use Krita for its painting features, like blending modes and textured brushes, which allow them to obtain realistic light effects. Personally, I think that realism can be very boring, which is why I paint in a stylized way with uniform tints. Besides, I like to limit my range of possibilities to a small set of elements: palettes of 5-8 colors and 2-3 brushes. So at the beginning I felt like Krita had too many options for me. But little by little I adapted the GUI to my workflow. Now I really think everybody can find their own way to use Krita, no matter their painting style.

What do you love about Krita?

Two elements I really love:
1) The favourite presets docker which pops up on right click. It contains everything you need to keep painting, and it is a pleasure to control everything at a glance.
2) The Composition tab, which lets me completely change the color palette or experiment with new effects without losing the original version of a drawing.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think that selections are not intuitive at all and could be improved. When dealing with complex selections, it is time-consuming to check the selection mode in the options tab (replace, intersect, subtract) and proceed accordingly -- especially considering that by default the selection mode is whatever you had when you last used the tool (which in the meantime you have probably forgotten). I think it would be much better if every selection tool started in “normal” mode by default, and one could switch to a different mode by pressing Ctrl/Shift.

What sets Krita apart from the other tools that you use?

Krita is by far the most complete digital painting tool developed on Linux. It is widely customizable (interface, workspaces, shortcuts, tabs) and it offers a very powerful brush engine, even compared to proprietary applications. Also, a very important aspect is that the Krita Foundation has a solid organization and develops the software continuously thanks to donations, Kickstarter campaigns and so on. This is particularly important in the open source community, where well-designed projects sometimes disappear because they are not supported properly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


The musicians in the field.

What techniques and brushes did you use in it?

As I said, I like to have a limited set of presets. In this illustration I mostly used the “pastel_texture_thin” brush, which is part of the default set of brushes in Krita. I love its texture and the fact that it is pressure sensitive. I also applied a global bitmap texture on an overlay layer.

Where can people see more of your work?

www.liviofania.com
https://www.facebook.com/livio.fania.art

Anything else you’d like to share?

Yes, I would like to add that I release all my illustrations under a Creative Commons license, so you can download my portfolio, copy it and use it for non-commercial purposes.

Categories: FLOSS Project Planets

Midterm update

Planet KDE - Tue, 2015-06-30 09:34

As we've reached the midway point of our journey, I think some updates are in order. All I can say is that I've had a really good time this last month; coding and watching the project grow is just awesome. But enough talking, let's get to the interesting part. During this month, with the assistance of my mentor (big thanks here to Jasem), I designed a GUI for the Scheduler and implemented the simplest scenario for an observation schedule (I will explain this in a minute). This was done for the purpose of testing, with the intent of further use, the DBus call functionality. As a possible stand-alone program, the Scheduler must be as independent as possible. Here you can see how the GUI turned out:

And now, back to the current scheduler logic. I implemented the functionality for the “Now” scenario. Basically, after a user selects an object, they can specify that the observation should start right now by checking the “Now” checkbox. After the scheduler starts, it begins to make DBus calls through the Ekos interface: slewing the telescope, loading the sequence file and starting the sequence. The next order of business will be to figure out an algorithm which can determine which object should be prioritized. Adding the implementation of the “Specific time” functionality to this algorithm will complete the basic scheduler logic that needs to be implemented. That's it for now. I will return with further updates. Stay tuned :D


Categories: FLOSS Project Planets

GSoC: [Kdenlive] Animated Keyframe widget

Planet KDE - Tue, 2015-06-30 08:15

After Dan Dennedy implemented the Mlt::Animation API for use, I've made a separate widget for the new animation keyframes.

Now I've tested this with the Volume effect in Kdenlive, with the 'level' property set to the value "0=0.5;100|=1;200~=0.5", where (|) represents a discrete keyframe, (=) represents a linear interpolated keyframe and (~) represents a smooth spline keyframe.

We now have an animated keyframe widget where we can see the keyframes, add or remove them, and even edit the values of existing ones.

Editing the type of a keyframe is in progress as well. After that, the widget will display these keyframes on the clip in the timeline and make them editable directly from the track -- at least their positions.

Categories: FLOSS Project Planets

Colm O hEigeartaigh: An STS JAAS LoginModule for Apache CXF

Planet Apache - Tue, 2015-06-30 07:27
Last year I blogged about how to use JAAS with Apache CXF, and the different LoginModules that were available. Recently, I wrote another article about using a JDBC LoginModule with CXF. This article will cover a relatively new JAAS LoginModule added to CXF for the 3.0.3 release. It allows a service to dispatch a Username and Password to an STS (Security Token Service) instance for authentication via the WS-Trust protocol, and also to retrieve the user's roles by extracting them from a SAML token returned by the STS.

1) The STS JAAS LoginModule

The new STS JAAS LoginModule is available in the CXF WS-Security runtime module. It takes a Username and Password from the CallbackHandler passed to the LoginModule, and uses them to create a WS-Security UsernameToken structure. What happens next depends on a configuration setting in the LoginModule.

If the "require.roles" property is set, then the UsernameToken is added to a WS-Trust "Issue" request to the STS, and a "TokenType" attribute is sent in the request (defaults to the standard "SAML2" URI, but can be configured). The client also adds a WS-Trust "Claim" to the request that tells the STS to add the role of the authenticated end user to the request. How the token is added to the WS-Trust request depends on whether the "disable.on.behalf.of" property is set or not. By default, the token is added as an "OnBehalfOf" token in the WS-Trust request. However, if "disable.on.behalf.of" is set to "true", then the credentials are used according to the WS-SecurityPolicy of the STS endpoint. For example, if the policy requires a UsernameToken, then the credentials are added to the security header of the WS-Trust request. If the "require.roles" property is not set, the the UsernameToken is added to a WS-Trust "Validate" request.

The STS validates the received UsernameToken credentials supplied by the end user, and then either creates a token (if the Issue binding was used), or just returns a simple response telling the client whether the validation was successful or not. In the former use-case, the token that is returned is cached meaning that the end user does not have to re-authenticate until the token expires from the cache.

The LoginModule has the following configuration properties:
  • require.roles - If this is defined, then the WS-Trust Issue binding is used, passing the value specified for the "token.type" property as the TokenType, and the "key.type" property for the KeyType. It also adds a Claim to the request for the default "role" URI.
  • disable.on.behalf.of - Whether to disable passing Username + Password credentials via "OnBehalfOf".
  • disable.caching - Whether to disable caching of validated credentials. Default is "false". Only applies when "require.roles" is defined.
  • wsdl.location - The location of the WSDL of the STS
  • service.name - The service QName of the STS
  • endpoint.name - The endpoint QName of the STS
  • key.size - The key size to use (if requesting a SymmetricKey KeyType). Defaults to 256.
  • key.type - The KeyType to use. Defaults to the standard "Bearer" URI.
  • token.type - The TokenType to use. Defaults to the standard "SAML2" URI.
  • ws.trust.namespace - The WS-Trust namespace to use. Defaults to the standard WS-Trust 1.3 namespace.
In addition, any of the standard CXF security configuration tags that start with "ws-security." can be used as documented here. Sometimes it is necessary to set some security configuration depending on the security policy of the WSDL.

Here is an example of the new JAAS LoginModule configuration:



2) A testcase for the new LoginModule

Using an STS via WS-Trust for authentication and authorization can be quite difficult to set up and understand, but the new LoginModule makes it easy. I created a testcase + uploaded it to github:
• cxf-jaxrs-jaas-sts: This project demonstrates how to use the new STS JAAS LoginModule in CXF to authenticate and authorize a user. It contains a "double-it" module with a "double-it" JAX-RS service, which is secured with JAAS at the container level and requires a role of "boss" to access the service. The "sts" module contains an Apache CXF STS web application which can authenticate users and issue SAML tokens with embedded roles.
To run the test, download Apache Tomcat and do "mvn clean install" in the testcase above. Then copy both wars and the jaas configuration file to the Apache Tomcat install (${catalina.home}):
  • cp double-it/target/cxf-double-it.war ${catalina.home}/webapps
  • cp sts/target/cxf-sts.war ${catalina.home}/webapps
  • cp double-it/src/main/resources/jaas.conf ${catalina.home}/conf
Next set the following system property:
  • export JAVA_OPTS=-Djava.security.auth.login.config=${catalina.home}/conf/jaas.conf
Finally, start Tomcat, open a web browser and navigate to:

http://localhost:8080/cxf-double-it/doubleit/services/100

Use credentials "alice/security" when prompted. The STS JAAS LoginModule takes the username and password, and dispatches them to the STS for validation.

    Categories: FLOSS Project Planets

    Europython: EuroPython 2015: Call for On-site Volunteers

    Planet Python - Tue, 2015-06-30 07:02

    EuroPython is organized and run by volunteers from the Python community, but we’re only a few and we will need more help to make the conference run smoothly.

    We need your help !

    We will need help with the conference and registration desk, giving out the swag bags and t-shirts, session chairing, entrance control, set up and tear down, etc.

    Perks for Volunteers

In addition to endless fame and glory as an official EuroPython Volunteer, we have also added a few real-life perks for you:

    • We will grant each volunteer a compensation of EUR 22 per shift
    • Volunteers will be eligible for student house rooms we have available and can use their compensation to pay for these
    • Get an awesome EuroPython Volunteer T-Shirt that you can keep and show off to your friends :-)
    Register as Volunteer

    Please see our EuroPython Volunteers page for details and the registration form:

    If you have questions, please write to our helpdesk@europython.eu.

    Hope to see you in Bilbao :-)

    Enjoy,

    EuroPython 2015 Team

    Categories: FLOSS Project Planets

    Amazee Labs: Debug Solr queries

    Planet Drupal - Tue, 2015-06-30 07:00
Debug Solr queries
By Vasi Chindris, Tue, 2015-06-30 13:00

Solr is great! When you have a site, even one without much content, and you want full-text search, then using Solr as a search engine will greatly improve both the speed of the search itself and the accuracy of the results. But, as so often happens, the good things come with a drawback too. In this case, the drawback is a new system that our web application has to communicate with. This means that, even if the system is pretty good by default, in some cases you have to understand more deeply how it works: besides being able to configure the system, you have to know how to debug it. In the following we'll see how to debug the Solr queries our applications use for searching, but first let's think of a concrete example of when we would need to debug a query.

    An example use case

Let’s suppose we have two items which both contain a specific word (say ‘building’) in the title. And we have a list where we show search results ordered first by score and then, when scores are equal, by creation date, descending. At first sight you would say that, because both items have the word in the title, they have the same score, so you should see the newest item first. Well, it may be that this is not true: even though both have the word in the title, the scores are not the same.

    Preliminaries

Let’s suppose we have a system which uses Solr as a search server. In order to debug a query, we first have to be able to run it directly against Solr. The easiest case is when Solr is accessible via HTTP from your browser. If not, Solr must be reachable from the same server where your application sits, so you call it from there. I won't dwell on this: if you managed to get Solr running for your application, you should be able to call it.

    Getting your results

The next thing to do is to make a query with the exact same parameters your application uses. To have a concrete example, we will assume a Drupal site which uses the Search API module with Apache Solr as the search server. One way to get the exact query being made is to check the SearchApiSolrConnection::makeHttpRequest() method, which calls drupal_http_request() with a URL. You could also use the Solr logs to check the query, if that is easier. Let's say we search for the word “building”. An example query looks like this:

    http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value%5E5.0&qf=tm_title%5E13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&wt=json&json.nl=map&q=%22building%22

If you take that and run it in the browser, you should see JSON output with the results, something like:

    To make it look nicer, you can just remove the “wt=json” (and optionally “json.nl=map”) from your URL, so it becomes something like:

    http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A"articles"&fq=hash%3Ao47rod&start=0&rows=10&sort=score desc%2C ds_created desc&q="building"

which should result in much nicer XML output:

    List some additional fields

So now we have the results from Solr, but all they contain is the internal item ID and the score. Let's add some fields which will help us see exactly what text the items contain. The fields you are probably most interested in are the ones in the “qf” parameter of your URL. In this case we have:

    qf=tm_body%24value^5.0&qf=tm_title^13.0

which means we are probably interested in the “tm_body%24value” and “tm_title” fields. To make them appear in the results, we add them to the “fl” parameter, so the URL becomes something like:

    http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2Ctm_title&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22

    And the result should look something like:

    Debug the query

Now everything is ready for the final step in getting the debug information: adding the debug flag. It is very easy: all you have to do is add “debugQuery=true” to your URL, which means it will look like this:

    http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2Ctm_title&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22&debugQuery=true

You should now see more debug information, like how the query is parsed, how much time it takes to run, and, probably most important, how the score of each result is computed. If your browser does not display the formula in an easily readable way, you can copy and paste it into a text editor; it should look something like:

As you can see, computing the score of an item is done using a pretty complex formula, with many variables as inputs. You can find a few more details about these variables here: Solr Search Relevancy

    Further reading and useful links

    Categories: FLOSS Project Planets

    "Menno's Musings": IMAPClient 0.13

    Planet Python - Tue, 2015-06-30 06:38

    I'm chuffed to announce that IMAPClient 0.13 is out!

    Here's what's new:

    • Added support for the ID command (as per RFC2971). Many thanks to Eben Freeman from Nylas.
    • Fixed exception with NIL address in envelope address list. Thomas Steinacher gets a big thank you for this one.
    • Fixed a regression in the handling of NIL/None SEARCH responses. Thanks again to Thomas Steinacher.
    • Don't traceback when an unparseable date is seen in ENVELOPE or INTERNALDATE responses. None is now returned instead.
    • Extended timestamp parsing support to allow for quirky timestamp strings which use dots for the time separator.
    • Replaced the horrible INTERNALDATE parsing code
    • The datetime_to_imap top-level function has been moved to the datetime_util module and is now called datetime_to_INTERNALDATE. This will only affect you in the unlikely case that you were importing this function out of the IMAPClient package.
    • The docs for various IMAPClient methods, and the HACKING.rst file have been improved.
    • CONDSTORE live test is now more reliable (especially when running against Gmail)

    See the NEWS.rst file and manual for more details.

    IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.

    I'm also excited to announce that Nylas (formerly Inbox) has now employed me to work on IMAPClient part time. There should be a significant uptick in the development of IMAPClient.

    The next major version of IMAPClient will be 1.0.0, and will be primarily focussed on enhancing TLS/SSL support.

    Categories: FLOSS Project Planets

    Nicola Iarocci: Cerberus 0.9 has been released

    Planet Python - Tue, 2015-06-30 05:16
A few days ago Cerberus 0.9 was released. It includes a bunch of cool new features; let’s browse through some of them.

Collection rules

First up is the new set of anyof, allof, noneof and oneof validation rules. anyof allows you to list multiple sets of rules to validate against. The field will be considered […]
    Categories: FLOSS Project Planets

    ERPAL: How we’re building our SaaS business with Drupal

    Planet Drupal - Tue, 2015-06-30 05:00

    Have you ever thought about building your own Software-as-a-Service (SaaS) business based on Drupal? I don't mean selling Drupal as a service but selling your Drupal-based software under a subscription model and using Drupal as the basis for your accounting, administration, deployment and the tool that serves and controls all the business processes of your SaaS business. Yes, you have? That's great! We’ve done the same thing over the last 12 months, and in this blog post I want to share my experiences with you (and we’d be delighted if you shared your experiences in the comments). I’ll show you the components we used to build Drop Guard – a Drupal-auto-updater-as-a-service (DAUaaS ;-)) that includes content delivery and administration, subscription handling, CRM and accounting, all based on ERPAL Platform.

    I’m not talking about a full-featured, mature SaaS business yet, but about a start-up in which expense control matters a lot and where agility is one of the most important parameters for driving growth. Of course, there are many services out there for CRM, payment, content, mailings, accounting, etc. But have you added up all the expenses for those individual services, as well as the time and money you need to integrate them properly? And are you sure you’ve made a solid choice for the future? I want to show you how Drupal, as a highly flexible open source application framework, brings (almost) all those features, saves you money in the early stages of your SaaS business and keeps you flexible and agile in the future. Below you’ll find a list of the tools we used to build the components of the Drop Guard service.

    Components of a SaaS business application

Content: This is the page where you present all the benefits of your service to potential clients. It is mostly content-driven and provides a list of plans your customers can subscribe to. There’s nothing special about this, as Drupal provides all the features right out of the box. The strength of Drupal is that it integrates with all the other features listed below, in one system. With the flexible entity structure of Drupal and the Rules module, you can automate your content and mailings to keep users on board during the trial period and convince them to purchase a full subscription.

Trial registration: Once your user has signed up using just her email address, she’ll want to start using and testing your service for free during the trial period. Drupal provides this registration feature out of the box. To deploy your application (if you run single instances for every user), you could trigger the deployment with Rules. With the commerce_license module you can create an x-day trial license entity and replace it with the commercial entity once the user has bought and paid for a license.

    Checkout: After the trial period is over, your user needs to either buy the service or quit using it. The process can be just like the checkout process in an online store. This step includes a subscription to a recurring payment provider and the completion of a contact form (to create a complete CRM entry for this subscriber). We used Drupal commerce to build a custom checkout process and commerce products to model the subscription plans. To notify the user about the expiration of her trial period, you can send her one or more emails and encourage her to get in touch. Again, Rules and the flexible entity structure of Drupal work perfectly for this purpose.

Accounting: Your customer data needs to be managed in a CRM, as it’s some of the most valuable information in your SaaS business. If you’ve just started your SaaS business, you don't need a full-featured and expensive CRM system, but one that scales with your business as it grows and can be extended later with additional features if needed. The first and only required feature is a list of customers (your subscribers) and a list of their orders and related invoices (paid or unpaid). As we use CRM Core to build the CRM, we can extend the contact entities with fields, build filterable lists with Views, reference subscriptions (commerce orders) to contacts and create invoices (a bundle of the commerce order entity, pre-configured by the ERPAL invoice module).

    Recurring payment: If you run your SaaS business on a subscription-based model where your clients pay for the service periodically, you have two options to process recurring payments. Handling payments by yourself is not worth trying as it’s too risky, insecure and expensive. So, either you use Stripe to handle recurring payments for you or you can use any payment provider to process one time payments and implement the recurring feature in Drupal. There are some other SaaS payment services worth looking at. We've chosen the second option using Paymill to process payments in combination with commerce_license and commerce_license_billing to implement the recurring feature. For every client with an active subscription, an invoice is created every month and the amount is charged via the payment provider. Then the invoice is set to "paid" and the service continues. The invoice can be downloaded in the portal and is accessible for both the SaaS operator and the client as a dataset and/or a PDF file.

Deployment: Without going into deep details of application deployment, Docker is a powerful tool for deploying single-instance apps for your clients. You may also want to have a look at API-based Drupal hosting platforms, such as Platform.sh, Pantheon or Acquia Cloud, if you want to sell Drupal-based applications via a SaaS model. They will make your deployment very comfortable and easy to integrate. You can use Drupal multi-site instances or the Drupal access system to separate user-related content (the latter can be very tricky and can have performance impacts with big data!). If your app produces a huge amount of data (entities or nodes), I recommend single instances with Docker or a Drupal hosting platform. As Drop Guard automates deployment and therefore doesn’t produce that much data, we manage all our subscribers in one Drupal instance but keep the decoupled update server horizontally scalable.

    Start building your own SaaS business

    If you’re considering building your own SaaS business, there’s no need to start from scratch. ERPAL Platform is freely available, easy-to-customize and uses Drupal contrib modules such as Commerce, CRM Core and Rules to connect all the components necessary to operate a SaaS business process. With ERPAL Platform you have a tool for developing your SaaS business in an agile way, and you can adapt it to whatever comes in the near future. ERPAL Platform includes all the components for CRM and accounting and integrates nicely with Stripe (and many others, thanks to Drupal Commerce) as well as your (recurring) payment provider. We can modify the default behavior with entities, fields, rules and views to extend the SaaS business platform. We used several contrib modules to extend ERPAL Platform to manage licensed products (commerce license and commerce license billing). If you want more information about the core concepts of ERPAL Platform, there’s a previous blog post about how to build flexible business applications with ERPAL Platform.

    This is how we built Drop Guard, a service for automating Drupal updates with integration into development and deployment workflows. As we’ve just started our SaaS business, we’ll keep you posted with updates along our way to becoming a full-fledged, Drupal-based SaaS business. For instance, we plan to add metrics and marketing automation features to drive traffic. We’ll share our experiences with you here and we’d be happy if you’d share yours in the comments!

    Categories: FLOSS Project Planets

    Cocomore: MySQL - Query optimization

    Planet Drupal - Tue, 2015-06-30 04:02

Queries are the centerpiece of MySQL, and they have high optimization potential (in conjunction with indexes). This is especially true for big databases (whatever "big" means). Modern PHP frameworks tend to execute dozens of queries. Thus, as a first step, you need to know what the slow queries are. A built-in solution for this is the MySQL slow query log. It can either be activated in my.cnf or dynamically with the --slow_query_log option. In both cases, long_query_time should be reduced to an appropriate value.
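For reference, a minimal my.cnf sketch for enabling the slow query log (the log file path is a placeholder; on a running server the same switches can be flipped with SET GLOBAL):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1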


    Categories: FLOSS Project Planets