FLOSS Project Planets

GNUnet News: 6th Dev Mumble - April 27th, 9pm CEST @ gnunet.org

GNU Planet! - Thu, 2015-04-23 13:22

Hi devs,

On the 27th we get to officially announce the results from the GSoC application process to the students, so we should probably use this opportunity to also have a first discussion with those who have been selected. So, let's have the 6th developer Mumble on Monday, April 27th, 9pm CEST, as usual using the Mumble server on gnunet.org. Agenda items include:

  • GSoC announcements and planning
  • GNUnet 0.10.2 release: To CADET or not to CADET?

I hope all GSoC applicants, mentors and Bart can make it, naturally everybody is welcome to join.

Categories: FLOSS Project Planets

Lullabot: Importing huge databases faster

Planet Drupal - Thu, 2015-04-23 12:00

Over the past few months I have been banging my head against a problem at MSNBC: importing the site's extremely large database to my local environment took more than two hours. With a fast internet connection, the database could be downloaded in a matter of minutes, but importing it for testing still took far too long. Ugh!

In this article I'll walk through the troubleshooting process I used to improve things, and the approaches I tried — eventually optimizing the several-hour import to a mere 10-15 minutes.
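
The full write-up is not included in this excerpt, so the following is a purely illustrative sketch and not necessarily the approach the article takes: one common way to speed up a large MySQL import is to relax autocommit and integrity checks for the duration of the load. File and database names below are placeholders.

import subprocess

# Hypothetical names, not from the article.
DUMP_FILE = "site_backup.sql"
DATABASE = "drupal_local"

# Session settings that typically speed up bulk imports; they are restored
# (and the transaction committed) at the end of the stream.
prelude = b"SET autocommit=0;\nSET unique_checks=0;\nSET foreign_key_checks=0;\n"
epilogue = b"COMMIT;\nSET unique_checks=1;\nSET foreign_key_checks=1;\n"

mysql = subprocess.Popen(["mysql", DATABASE], stdin=subprocess.PIPE)
mysql.stdin.write(prelude)
with open(DUMP_FILE, "rb") as dump:
    # Stream the dump in 1 MiB chunks instead of loading it into memory.
    for chunk in iter(lambda: dump.read(1 << 20), b""):
        mysql.stdin.write(chunk)
mysql.stdin.write(epilogue)
mysql.stdin.close()
mysql.wait()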

Categories: FLOSS Project Planets

And, it is official

Planet KDE - Thu, 2015-04-23 11:58

Softpedia was one of the first to break the news, with the headline, “Kubuntu 15.04 Officially Released, Based on Beautiful Plasma 5 Desktop”. They write:

Kubuntu developers don’t usually integrate a custom KDE experience, so Kubuntu is the perfect opportunity to test and see what the developers want to achieve with latest and best of KDE. This is close to what the makers of this great desktop environment are really trying to achieve, so you can easily use Kubuntu as a standard by which any other KDE-powered OS can be judged.

Categories: FLOSS Project Planets

Drupal Watchdog: Building My First Drupal 8 Site

Planet Drupal - Thu, 2015-04-23 11:50
Feature

You’ve heard about it, read about it, and – if you’re like me – dreamed about it. Well, it’s time to stop dreaming and start doing.

Drupal 8!

If you have experience building sites using Drupal 7, you’ll be pleased to see that from a site building and administration perspective, things are nearly the same.

And if Drupal 8 is your first Drupal experience, you will be pleasantly surprised at how easy it is to build an amazing site.

Installing Drupal

First things first.

You’ll need a basic set of software installed and operational on your laptop, desktop, or server before proceeding with the Drupal 8 installation. Drupal requires that Apache, MySQL, and PHP are installed and working before beginning the installation process. There are several ways to easily install the required software using LAMP (Linux, Apache, MySQL and PHP), WAMP (Windows), or MAMP (Mac) solutions. Grab Google and do a quick search.

Got it?

Good. Now there are five basic steps to install Drupal:

  1. Download the latest version of Drupal 8
  2. Extract the distribution into your Apache web root directory
  3. Create a database to hold the content from the site
  4. Create the files directory and settings.php
  5. Run the installation process by visiting your website in a browser

For details on the installation process visit http://wdog.it/4/1/docs.

These are the basic building blocks that will provide the foundation for your Drupal 8 site:

  1. Users
  2. Taxonomy
  3. Content types
  4. Menus

Creating Users

If your site is simple and you’re the only one who will be authoring, editing, and managing content, then the admin account you created during the installation process may be all that you need. In situations where you want to share the content creation and management activities with others, you need to create accounts for those users.

Categories: FLOSS Project Planets

Kubuntu 15.04 – the most beautiful desktop alive

Planet KDE - Thu, 2015-04-23 11:48

We’ve released Kubuntu 15.04, thanks to all who helped.

And thanks to Lucas from the VDG we have a pretty video to introduce the world to the new desktop – Plasma 5.

Categories: FLOSS Project Planets

Chiradeep Vittal: How HP Labs nearly invented the cloud

Planet Apache - Thu, 2015-04-23 11:44

On the heels of HP’s news of not-quite abandoning the Cloud, there is coverage of how AWS stole a march on Sun’s plans to provide compute-on-demand. The timeline for AWS starts in late 2003, when an internal team at Amazon hatched a plan that, among other things, could offer virtual servers as a retail offering. Sun’s offering involved bare metal and running jobs, not virtual machines.

In a paper published in 2004 a group of researchers at HP Labs proposed what they called “SoftUDC” – a software-based utility data center. The project involved:

  • API access  to virtual resources
  • Virtualization using the Xen Hypervisor
  • Network virtualization using UDP overlays almost identical to VxLAN
  • Virtual Volumes accessible over the network from any virtual machine (like EBS)
  • “Gatekeeper” software in the hypervisor that provides the software and network virtualization
  • Multi-tier networking using subnetting and edge appliances (“VPC”)
  • Automated OS and application upgrades using the “cattle” technique (just replace instead of upgrade).
  • Control at the edge: firewalls, encryption and QoS guarantees provided at the hypervisor

Many of these ideas now seem “obvious”, but remember this was 2004. Many of these ideas were even implemented. For example, VNET is the name of the network virtualization stack / protocol. This was implemented as a driver in Xen dom0 that would take Ethernet frames exiting the hypervisor and encapsulate them in UDP frames.

Does this mean HP could have been the dominant IaaS player instead of AWS if only it had acted on its Labs innovation? Of course not. But let's say in 2008, when AWS was a clear danger, it could have dug a little deeper into its own technological inventory to produce a viable competitor early on. Instead we got OpenStack.

Many of AWS’s core components are based on similar concepts: the Xen hypervisor, network virtualization, virtual volumes, security groups, and so on. No doubt they came up with these concepts on their own — more importantly, they implemented them and had a strategy for building a business around them.

Who knows what innovations are cooking today in various big companies, only to get discarded as unviable ideas. This can be framed as the Innovator’s Dilemma as well.


Categories: FLOSS Project Planets

Identities, I usually don’t stop being myself

Planet KDE - Thu, 2015-04-23 11:19

One of the most interesting developments I’ve seen recently inside KDE is KAccounts (or Web Accounts, as it used to be called). It’s not even a KDE project, but a project Nokia started some years ago, I'm guessing back in the MeeGo days.

Why do I care?

As a developer it’s always very important to know what resources you have available. The more you have, the richer the users’ experience will be. Resources are not only hardware but often just semantically-structured information. For example, KPeople helps us make sense of the contact list and offers us people instead, or Solid, which turns the different ways of exposing hardware information into something easily consumable by front-end applications.

Now we're talking about identities. We don't exclusively rely on local data anymore; I think that has been a fact for a while now (and even 5 years ago). Our disks are just another place where we store some of our data. KDE projects must be prepared to easily offer such workflows. One of the important parts of doing it properly is authentication:

  • We need a list of available services we can use.
  • We don't want our users to be asked to authenticate all the time.
  • We don't want each application to implement the N authentication algorithms of the N different providers our users will eventually have.

Why do you care? (as a KDE software user)

First of all, you want your developers to have the best tools (GOTO: “Why do I care?”).


There's a more important issue to figure out, more ideological than technical: we ask service and software providers to use standard specifications to improve interoperability, but then in practice we seldom leverage them. The case-by-case setup burden makes it hard for users to have everything configured properly. An example of this is ownCloud integration: I know a couple of people running an instance, but they seldom integrate it into Dolphin, and even if they did they'd still need to integrate the rest of the provided services, such as calendar, contacts, etc.
We want to tell our software what providers we have available; those providers will offer different services, and the software should integrate with those services as much as possible.

How can I help?
  • Testing: There's a lot going on, so thorough testing and ideas are important, both for making sure everything works as expected and for deciding what software should adopt KAccounts. It will already be available with KDE Applications 14.04.
  • Developing: Make sure your application of choice uses KAccounts to figure out the configured services. Here’s a good place to find documentation: API, and an example use-case.
  • Promotion: It's important to reach out to service providers, make sure they know there's such a possibility, and hopefully they can help us work better with their services (hey! a by-the-book example of a win-win situation!).

Nowadays our computing experience is spread among different services, and we need to be smart enough to understand how our users will adopt them and make it as transparent as possible.
AccountsSSO and KAccounts are a solid step forward in this direction; let's get back in control of our data!

Categories: FLOSS Project Planets

Bryan Pendleton: I got tele-scammed

Planet Apache - Thu, 2015-04-23 10:37

The other day, my mobile rang.

I was in a meeting, so I just sent the call to voicemail.

Later, I listened to the voicemail. A robotic voice droned:

Hello. We have been trying to reach you. This call is officially a final notice from IRS, Internal Revenue Services. The reason of this call is to inform you that IRS is filing lawsuit against you. To get more information about this case file, please call immediately on our department number NNN NNN NNNN.

It was chilling. Lawsuits? The IRS is filing a lawsuit against me?

But something about the call didn't sound right.

Well, actually, MANY things about the call didn't sound right:

  • It was a robot, not a person
  • It didn't greet me by name
  • It was full of awkward, incorrect English ("Internal Revenue Services"?!!)

Something clicked in my brain and I remembered reading something last fall: Scam Phone Calls Continue; IRS Unveils New Video to Warn Taxpayers.

The new Tax Scams video describes some basic tips to help protect taxpayers from tax scams.

These callers may demand money or may say you have a refund due and try to trick you into sharing private information. These con artists can sound convincing when they call. They may know a lot about you, and they usually alter the caller ID to make it look like the IRS is calling. They use fake names and bogus IRS identification badge numbers. If you don’t answer, they often leave an “urgent” callback request.

Yep, that matched, quite well.

A nice article at the South Bend Tribune was helpful, too, as it even included the same fake phone number that had appeared on my phone: Credit 'charge' appears very real

I have repeatedly written about IRS scam telephone calls but I am doing so again as your BBB continues to receive many questions from area residents who are concerned about receiving such calls. Caller IDs are showing all kinds of phone numbers, which pretty much indicates the numbers are being spoofed. Some have reported their Caller ID shows 585-310-3870, 725-422-5697 and 726-597-6584, but the IRS impersonator provides different numbers on the message.

Most recipients are being told “this is your final notice from the IRS” and “a lawsuit is being filed against you for failure to pay taxes.” Some are saying if the taxes are not paid at once, a warrant will be issued for your arrest and the police will be coming after you. Consumers are then told taxes must be paid “immediately.” Instructions are given to wire the money via Western Union or get an advance cash card such as Green Dot MoneyPak from your local drugstore or retailer namely Wal-Mart, Kmart or Target.

I read about lots of scary, annoying stuff, but rarely do I actually get one of these myself.

In a weird way, it was good to get one; it kind of was a tune-up, a practice exam, a drill.

A good reminder that it's a strange world out there, and you should stay on your toes and not fall for the nasty scam.

Oh, and yes: I simply deleted the voicemail (though I did file a complaint on the FTC's website for reporting telescams, and I did re-check that my phone is on the do-not-call list, which it has been for years).

Categories: FLOSS Project Planets

Import Python: Free Community Run Python Job Board

Planet Python - Thu, 2015-04-23 09:40

The Unofficial Python Job Board is a 100% free and community-run job board. To add a job vacancy/posting, simply send a pull request (yes, you read that correctly) to the Python Job Repository.

Submitting a Job Vacancy/Posting/Ad

The jobs board is generated automatically from the git repository and hosted using GitHub Pages.

All adverts are held as Markdown files, with an added header, under the jobs/ directory. Job files should look something like this:

---
title: <Job Advert Title (required)>
company: <Your Company (required)>
url: <Link to your site/a job spec (optional)>
location: <where is the job based?>
contract: permanent (or contract/temporary/part-time ..)
contact:
    name: <Your name (required)>
    email: <Email address applicants should submit to (required)>
    phone: <Phone number (optional)>
    ...: ...
created: !!timestamp '2015-02-20' <- The date the job was submitted
tags:
    - london
    - python
    - sql
---

Full job description here, in Markdown format

To add your job, submit a pull request to this repo that adds a single file to the jobs/ directory. This file should match the example above.

When each pull request is submitted, it gets validated, and then manually reviewed, before being added to the site. If the pull request fails the validation testing (Travis) then you must fix this before the pull request can proceed.

Previewing your submission

To preview your submission before creating a review request, there are a number of steps to follow:

  1. Install hyde (hyde.github.io): pip install hyde
  2. Install fin: pip install fin
  3. Clone/check out the https://github.com/pythonjobs/template repository
  4. Within this clone, put your new file in hyde/content/jobs/[job_filename].html
  5. Delete the contents of the deploy directory.
  6. From within hyde/, run hyde serve
  7. Open a web browser and navigate to http://localhost:8080/

Categories: FLOSS Project Planets

Import Python: Porting To Python 3 Book Campaign

Planet Python - Thu, 2015-04-23 09:40

TL;DR version of the post: We love Python 3. Porting to Python 3 - Edition 2 is an awesome book. There is a campaign underway to get the book updated and made free for all to read and contribute to. Have a look and see if you would like to support it.

2014 has seen increased adoption of Python 3

  • Only 32 of the top 200 most-used Python packages do not yet support a 3.x release.
  • Ubuntu just did away with Python 2 in their latest LTS release; Fedora is next in line.
  • The Python 3.x download percentage on the official website is more or less equal to Python 2.x, hinting at increased adoption.

New features and changes in Python 3.x are not being backported to the 2.x series; the changelog alone is a compelling reason to adopt Python 3.x.

However, the majority of in-production Python software is still on 2.x. This is where "Porting to Python 3: An in-depth guide", written by Lennart Regebro, comes into the picture.

  • Say you want to port a codebase to 3.x: take a look at the Strategies for Porting chapter.
  • Rather than learn by trial and error, you could read up on common migration problems while porting (a small example of one such problem follows this list).
  • The book also talks about the newer features of Python you can use to improve your codebase as you port.
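
As a small illustration of the kind of change such a migration involves (this snippet is mine, not taken from the book), consider the print statement and integer division, two of the most common stumbling blocks:

# On Python 2, `print "ratio:", 3 / 4` prints "ratio: 0" (integer division).
# Written for Python 3 -- and, thanks to the __future__ imports, it behaves
# identically on Python 2.7 during a gradual migration:
from __future__ import print_function, division

print("ratio:", 3 / 4)   # 0.75 -- true division
print("floor:", 3 // 4)  # 0    -- explicit floor division
text = u"naïve"          # explicit unicode literal, valid on 2.x and 3.3+
print(text.upper())      # NAÏVE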

"Porting to Python 3" is in need of an update, since Python 2.5 and 3.2 are by now more or less obsolete for the majority of Python developers. Lennart sadly has less time. But he has decided to convert it into a community book, put it up on GitHub and give the Python Software Foundation a license to use it. However, to make that a reality he has to clean up the current repository, which is in a mess, and that requires effort.

This campaign is to get some funding to make it happen. Have a look at the campaign and see if you can contribute. Your contribution will result in a free-for-all "Porting to Python 3" book, updated and available on GitHub.

About the Author

Lennart Regebro has been coding in Python since 1999 (that's the last century, for you). He is a member of the Python Software Foundation and Pykonik, the Krakow Python user group. A look at Lennart Regebro's Stack Overflow profile shows he has answered 1,492 Python questions, and many of his top answers are on Python 3.

Categories: FLOSS Project Planets

Import Python: Python Practice Book Review

Planet Python - Thu, 2015-04-23 09:40

Python Practice Book is written by Anand Chitipothu.

Python Practice Book introduces one Python language concept at a time and immediately follows up with a question or two to test your understanding of the concept.

The book is compact and uses a programmer-friendly style of showing a code snippet along with an explanation. Its non-verbose style makes for a quick read.

From a pedagogy perspective, the book's explanations equip the reader with the requisite knowledge to answer the questions. Those who have just learned Python or are in the process of learning it should read Python Practice Book.

The book covers the following topics and has coding assignments/questions for each topic:

  • Basic data types of Python (int, str, dict, list) and how to use them (list/dict comprehensions, common operations),
  • Looping constructs and condition evaluation (if, else, True, False, for, while),
  • Object-oriented programming with Python (classes, inheritance) and modules,
  • Iterators and generators (a very well-written explanation of the iterator protocol and how to create your own generators; see the short illustration after this list),
  • Functional programming with Python.
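
As a taste of the iterator and generator material the review mentions (this snippet is mine, not from the book), a generator function is enough to implement the iterator protocol without writing __iter__() and __next__() by hand:

def countdown(n):
    # A generator: each 'yield' suspends the function and produces a value.
    while n > 0:
        yield n
        n -= 1

# The for loop drives the iterator protocol (iter()/next()) for us.
for value in countdown(3):
    print(value)          # prints 3, 2, 1

# The same protocol, driven by hand:
it = iter(countdown(2))
print(next(it))           # 2
print(next(it))           # 1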

Recently I recommended this book to an entrepreneur who incorporated it into their month-long Python training for newcomers and new joiners. They have nothing but good things to say about the book.

Anand's experience of taking many Python workshops over the years reflects in the quality of the content in the book.

Categories: FLOSS Project Planets

Import Python: Halting Content Recommendation and Ranking

Planet Python - Thu, 2015-04-23 09:40

Gmail marked all our emails (Issue No. 15) as spam. Many subscribers brought it to our notice.
Why? Our guess is:
Massive unique URL generation
To track which subscriber has clicked on which article/project/tweet, we made every URL unique. We do that so we can recommend and rank articles for subscribers according to their interests. Making each URL unique results in massive URL generation, which Gmail's spam filter detects and finds unusual.
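
A hypothetical sketch of the kind of per-subscriber link generation described above (names and the signing scheme are illustrative, not ImportPython's actual implementation):

import hashlib
import hmac

SECRET_KEY = b"newsletter-secret"   # placeholder secret

def tracking_url(subscriber_id, article_url):
    """Build a unique redirect URL so a click can be attributed to a subscriber."""
    payload = "{}|{}".format(subscriber_id, article_url).encode()
    token = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()[:16]
    return ("https://example.com/r/{token}?u={subscriber}&target={url}"
            .format(token=token, subscriber=subscriber_id, url=article_url))

# Every (subscriber, article) pair yields a different URL -- exactly the
# kind of mass URL uniqueness that can look suspicious to a spam filter.
print(tracking_url(42, "https://example.org/post"))
print(tracking_url(43, "https://example.org/post"))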

We decided to stop generating recommended content for now so that the emails wouldn't be marked as spam. What we will do instead is give each subscriber a preference page and allow them to click on the topics that interest them. For example, a Flask programmer would like to see more Flask content as opposed to Django, so they could click on Flask under the web framework category and we would only send them Flask-related content.

The fact that Gmail accounts for above 60% of our subscribers forces us to take this step.

Let us know if you have any alternatives we could try.

Categories: FLOSS Project Planets

Import Python: Conversation with Albert Sweigart - Author of Automate the Boring Stuff with Python

Planet Python - Thu, 2015-04-23 09:40

Al Sweigart is a software developer and teaches programming to kids and adults. He has written several Python books for beginners, including Hacking Secret Ciphers with Python, Invent Your Own Computer Games with Python, and Making Games with Python & Pygame.

Having worked with TechED startups, I have witnessed first-hand the impact of Albert's book Invent with Python on students learning the ropes of programming. We are happy to have him with us for a conversation around Python, books, and Python in education.

What got you started in the TechED space? Where did the motivation for Invent with Python come from?

Around 2009 a friend was a nanny for a precocious 10-year-old who wanted to learn how to program. I thought this would be a straightforward web search, but the materials I found didn’t quite meet my needs. There were plenty of (costly) books for professional software engineers and plenty of computer science freshman materials, but most people don’t kindle a joy in programming by calculating Fibonacci numbers.

So I started writing a tutorial on how to make simple games. I put the game projects in front of the concept explanations, only explaining enough about Python programming to get the reader through that particular game. Then I wrote another game, then another. Eventually this tutorial reached book-length, and some friends suggested I self-publish Invent Your Own Computer Games with Python.

People responded well enough to the book that I wrote Making Games with Python & Pygame for making games with 2D graphics and Hacking Secret Ciphers with Python for making encryption and code-breaking programs. In early 2014 I left my software developer job to write Automate the Boring Stuff with Python full time.

Congratulations on your new book Automate the Boring Stuff with Python. Who is the target audience for this book?

There have been times when I’m chatting with non-programmer friends who incidentally tell me stories like, “Today at the office I spent three hours opening PDFs, copying a line, pasting it into a spreadsheet, and then moving on to the next PDF. Three hours.” And I used to tell them that they could have written a program to do that for them in fifteen minutes, but that news would sometimes crush them. They can’t believe how much effort they could have saved.

A computer is the primary tool for office workers, researchers, and administrators. They may have heard “everyone should learn to code” but not know how to get started or how exactly coding will be practical for them. Automate is a Python book for people who have never programmed before and, while they don’t necessarily want to become software engineers, they want to know how they can make better use of their computers to save them time. It’s a programming book that skips computer science and focuses on practical application.

Which version of Python is the book written for? One FAQ is which version of Python should one learn/use? Does it make a big difference from a beginner's standpoint?

The book uses Python 3, specifically, 3.4. There have been some backwards-incompatible changes (for the better) that were made between the 2.x and 3.x versions, but Python 3 was introduced several years ago now. The reasons to stall on upgrading, such as lack of Python 3-supporting modules, mostly no longer apply. The only excuse to use Python 2 is if you have a large existing program written in 2 that, despite all of the Python 3 migration tools that are available, can’t easily be upgraded.

But these are all technical details that don’t really apply to beginners; just use 3.

How has the experience been of writing a book for a publisher vis-a-vis self-publishing for you?

As great as self-publishing and print-on-demand have been, it has been excellent working with a publisher like No Starch Press. I soon learned that when writing a book the writing was the easiest part. Editing, formatting, layout, and especially marketing were all hats that I had to wear when self-publishing, and at times I wore them poorly.

Self-publishing is an excellent gateway for new, unknown writers, and a good way to get feedback and experience of the entire process by frequently releasing short works. But working with a professional team let me focus on the content, and the final book is of much higher quality than the ones I produced on my own.

Having spent a large part of my career creating a Visual Programming Language for Kids, I feel Visual Programming Languages like Scratch, CiMPLE, etc. are a better first step for children vis-a-vis Python. From a pedagogy standpoint, what are your thoughts on this?

I am a huge advocate of Scratch in particular. It’s clear that they’ve done a lot of research to make their user interface on a par with Apple’s best products. Scratch is great for the 8–12 age group. It has instant, graphical feedback and the snap-together block interface makes it literally impossible to make syntax mistakes from typos. It’s incredibly frustrating for a child to slowly type out a command, only to be greeted with an obscure error message because of a typo. But from discussions with other educators, kids can quickly outgrow the limitations of Scratch. Teenagers are more than capable of putting together programs in a high-level language like Python (or even younger, if they are enthusiastic about programming).

Categories: FLOSS Project Planets

PyCharm: PyCon 2015: How It All Happened

Planet Python - Thu, 2015-04-23 09:13

What a whirlwind the past couple weeks have been! We just got back from Montreal, Canada, which hosted PyCon 2015. The conference was awesome and the people just amazing! Let me tell you all about our experience (and announce the winners of our license raffle!).

 

Five of us from the PyCharm team attended this conference:

From left to right: Anna Morozova, Dmitry Trofimov, Ekaterina Tuzova, Andrey Vlasovskikh, Dmitry Filippov

Lots of things kept us busy during the conference days.

First, we attended the Python Language Summit where the latest trends of the Python language were discussed, such as the development of the new PEP 484 for type hinting. As you may already know, at PyCharm we’re constantly improving our static code analysis to provide you with better autocompletion, navigation, code inspections, refactorings and so forth. We have our own way to infer types and the annotations format to help us better understand your code. At the summit, we shared the challenges we’ve come across with static analysis, as well as some of our ideas for improving Python to be more friendly to static analysis tools like PyCharm. For a long while we at PyCharm had been actively working on co-creation of the new PEP 484, so it was great to talk face-to-face with core Python developers like Guido van Rossum, Jukka Lehtosalo, and Łukasz Langa. We took a pulse of the current developments and discussed further arrangements. At the end of the conference, Guido did a closing keynote about type hints and the future plans of the community. So, many exciting things to look forward to in this area!
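
For readers who have not seen PEP 484 yet, here is a minimal illustration of the annotation syntax it standardizes (the functions are made up, and the typing module needs Python 3.5+ or the backport):

from typing import List, Optional

def greet_all(names: List[str], greeting: str = "Hello") -> List[str]:
    # The annotations are hints for tools like PyCharm; they are not enforced at runtime.
    return ["{} {}".format(greeting, name) for name in names]

def find_user(user_id: int) -> Optional[str]:
    # Optional[str] documents that the function may return a str or None.
    return None

print(greet_all(["Guido", "Jukka"]))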

Python Education Summit was another fantastic event. We learned a lot about how Python is being used to teach programming to kids, novice programmers and those for whom Python is a second language. Our lightning talk covering the brand-new PyCharm Educational Edition was received very well by the community. It helped us understand where we’re all heading and what the current needs are in this domain. We’re proud that PyCharm Educational Edition is evolving in the right direction. Talking to teachers and developers who are passionate about nurturing novice developers, we agreed to collaborate on getting Python-rich curriculums adopted in some local US schools, which is one of the major problems in the community right now.

The main PyCon 2015 conference was just awesome. We felt privileged to be a part of the community that is setting a standard every day in the world of Python programming. We had a booth in the expo hall, participated in development sprints and talked to hundreds of people, answering questions about our company, products, PyCharm itself and collecting valuable feedback:

Having kicked off PyCharm 4.5 EAP prior to the conference, we were able to show off some new stuff we’d worked on lately.

It was also a great chance to meet a lot of our happy users and friends as well as new people passionate about Python and new technologies. To all the conference attendees, we say “Thank you!”. We hope you enjoyed it as much as we did!

We also held a PyCharm license raffle at the conference, with some cool swag up for grabs. Today we’re glad to give you the randomly chosen winners:

  • Natasha Scott
  • Steven D Gonzales
  • Thomas Kluyver
  • Jason Oliver
  • Luke Petschauer
  • Jeremy Ehrhardt
  • John Hagen

Congratulations! Your license notifications will be mailed out to you in the coming days.

If you didn’t win, you’ll still get a 15% discount for a new PyCharm personal license (if you ticked that option in the raffle application). We will get in touch with you soon with more information on how to redeem your personal discount.

With any problems, concerns or questions, ping us in comments below.

Develop with pleasure!
-PyCharm team

Categories: FLOSS Project Planets

Jim Birch: Drupal 7: Simplify

Planet Drupal - Thu, 2015-04-23 09:00

The Drupal Simplify module is a big help in removing cruft from the administrator's view of the Drupal UI. Simplify allows you to hide certain fields from the user interface on a global basis, or configured per node type, taxonomy, block, comment, or user.

What sent me looking for a module like this was the "Text Format" selection beneath every single WYSIWYG on the site.  While I think Drupal is incredible for allowing multiple input formats, 99 out of 100 times, I define which ONE input format a user can use per role.  So having this as an option beneath every rich text editor on the site just became wasted space that I wanted to remove.  And so I did!

But wait, there's more!  Simplify lets you hide so much more than that!  The following items can be hidden:

  • Administrative overlay (Users)
  • Authoring information
  • Book outline
  • Comment settings
  • Contact settings (Users)
  • Menu settings
  • Publishing options
  • Relations (Taxonomy)
  • Revision information
  • Text format selection
  • URL alias (Taxonomy)
  • URL path settings
  • Meta tags
  • URL redirects
  • XML sitemap

Install Simplify for Drupal 7

Read more

Categories: FLOSS Project Planets

Jim Jagielski: Being the Old One

Planet Apache - Thu, 2015-04-23 08:08

This may come off as "sour grapes", or being defensive, but that's not the case. Instead, it's to show how, even now, when we should know better, technological decisions are sometimes still based on FUD or misinformation, and sometimes even due to "viral" marketing. Instead of making a choice based on "what is best for us", decisions are being made on "what is everybody else doing" or, even worse, "what's the new hotness." Sometimes, being the Old One ain't easy, since you are seen as past your prime.

Yeah, right.

Last week, I presented at ApacheCon NA 2015, and one of my sessions was "What's New In Apache httpd 2.4." The title itself was somewhat ironic, since 2.4 has been out for a coupla years already. But in the presentation, I address this up front: the reason why the talk was (and is) still needed is that people aren't even looking at httpd anymore. And they really should be.

Now don't get me wrong, this isn't about "market share" or anything as trivial as that. Although we want as many people to use Apache projects as possible, we are much more focused on creating quality software, and not so much on being a "leader" in that software space. And we are well aware that there are use cases where httpd is not applicable or not the best solution, and that's great too.

First of all, there is still this persistent claim that httpd is slow... of course, part of the question is "slow how?" If you are concerned about request/response time, then httpd allows you to optimize for that, and provides the Prefork MPM. Sure, this MPM is more heavyweight, but it also provides the lowest latency and fastest transaction times possible. If instead you are concerned about concurrency, then the Event MPM is for you. The Event MPM in 2.4 has performance equal to other performance-focused Web Servers, such as Apache Traffic Server and NGINX. Even so, whenever you hear people talk about httpd, the first thing they will say is that "Apache is slow, whereas 'foo' was built for speed."

There is also the claim that httpd is too feature-full (or bloated)... Well, I guess we could say "guilty as charged." One of the design goals of httpd is to be a performant, general web-server. So it includes lots of features that one would want, or need, at the web-server layer. So yes, it includes caching, and proxy capability, and in-line content filtering, and authentication/authorization, and TLS/SSL (frontend and reverse-proxy), and in-server language support, etc... But if you don't need any of these capabilities, you simply don't load the modules in; the module design allows you to pick and choose what capabilities you do, or don't want, which means that your httpd instance is specific to your needs. If you want a web-server with all the bells and whistles, great. But if you want just a barebones, fast reverse proxy, you can have that too. Of course, I won't go into the irony of httpd being "blasted" for being bloated, while the hotness-of-the-day is praised for adding features that httpd has had for years. *grin*

Finally, we get to the main point: That Apache httpd is old, after all, we just celebrated our 20th anniversary; httpd is "old and busted", Foo is "new hotness"(*). Well, again, guilty as charged. The Apache httpd project is old, but httpd itself isn't. It is designed for the needs of today's, and tomorrow's, web: Async/event-driven design (if required), dynamic reverse-proxying with advanced load-balancing (and end-to-end TLS/SSL support), run-time dynamic configuration, LUA support (module development and runtime), etc...

You know what else is old? Linux.

Makes you think, huh?

Categories: FLOSS Project Planets

Leonardo Giordani: Python decorators: metaprogramming with style

Planet Python - Thu, 2015-04-23 08:00

This post is the result of a lot of personal research on Python decorators, meta- and functional programming. I want however to thank Bruce Eckel and the people behind the open source book "Python 3 Patterns, Recipes and Idioms" for a lot of precious information on the subject. See the Resources section at the end of the post to check their work.

Is Python functional?

Well, no. Python is a strong object-oriented programming language and is not really going to mix OOP and functional like, for example, Scala (which is a very good language, by the way).

However, Python provides some features taken from functional programming. Generators and iterators are one of them, and Python is not the only non pure functional programming language to have them in their toolbox.

Perhaps the most distinguishing feature of functional languages is that functions are first-class citizens (or first-class objects). This means that functions can be passed as an argument to other functions or can be returned by them. Functions, in functional languages, are just one of the data types available (even if this is a very rough simplification).

Python has three important features that allow it to provide a functional behaviour: references, function objects and callables.

References

Python variables share a common nature: they are all references. This means that variables are not typed per se, being pure memory addresses, and that functions do not declare the incoming data type for arguments (leaving aside gradual typing). Python polymorphism is based on delegation, and incoming function arguments are expected to provide a given behaviour, not a given structure.

Python functions are thus ready to accept every type of data that can be referenced, and functions can be referenced.

Read this post to dive into delegation-based polymorphism and references in Python.

Function objects

Since Python pushes the object-oriented paradigm to its maximum, it makes a point of always following the tenet everything is an object. So Python functions are objects as you can see from this simple example

>>> def f():
...     pass
...
>>> type(f)
<class 'function'>
>>> type(type(f))
<class 'type'>
>>> type(f).__bases__
(<class 'object'>,)
>>>

Given that, Python does nothing special to treat functions like first-class citizens; it simply recognizes that they are objects, just like any other thing.

Callables

While Python has the well-defined function class seen in the above example, it relies more on the presence of the __call__ method. That is, in Python any object can act as a function, provided that it has this method, which is invoked when the object is "called".

This will be crucial for the discussion about decorators, so be sure that you remember that we are usually more interested in callable objects and not only in functions, which, obviously, are a particular type of callable objects (or simply callables).
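
As a quick illustration (my own, not one of the original code samples of this post), any instance of a class that defines __call__() can be used exactly like a function:

class Adder:
    def __init__(self, amount):
        self.amount = amount

    def __call__(self, value):
        # Run when the instance is "called" like a function
        return value + self.amount

add_three = Adder(3)
print(add_three(4))         # 7
print(callable(add_three))  # True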

The fact that functions are callables can also be shown with some simple code

>>> def f():
...     pass
...
>>> f.__call__
<method-wrapper '__call__' of function object at 0xb6709fa4>

Metaprogramming

While this is not a post on language theory, it is worth spending a couple of words about metaprogramming. Usually "programming" can be seen as the task of applying transformations to data. Data and functions can be put together by an object-oriented approach, but they are still two different things. But you soon realize that, as you may run some code to change data, you may also run some code to change the code itself.

In low level languages this can be very simple, since at machine level everything is a sequence of bytes, and changing data or code does not make any difference. One of the simplest examples that I recall from my x86 Assembly years is the self-obfuscation code found in some computer viruses. The code was encrypted with a XOR cipher and the first thing the code itself did upon execution was to decrypt its own code and then run it. The purpose of such tricks was (and is) to obfuscate the code so that it would be difficult for an antivirus to find the virus code and remove it. This is a very primitive form of metaprogramming, since it recognizes that for Assembly language there is no real distinction between code and data.

In higher level languages such as Python, achieving metaprogramming is no longer a matter of changing byte values. It requires the language to treat its own structures as data. Every time we are trying to alter the behaviour of a part of the language we are actually metaprogramming. The first example that usually comes to mind is metaclasses (probably due to the "meta" word in their name), which are actually a way to change the default behaviour of the class creation process. Classes (part of the language) are created by another part of the language (metaclasses).

Decorators

Metaclasses are often perceived as a very tricky and dangerous thing to play with, and indeed they are seldom required in Python, with the most notable exception (no pun intended) being the Abstract Base Classes provided by the collections module.

Decorators, on the other hand, are a feature loved by many experienced programmers, and since their introduction the community has developed a big set of very interesting use cases.

I think that the first approach to decorators is often difficult for beginners because the functional version of decorators is indeed a bit complex to understand. Luckily, Python allows us to write decorators using classes too, which makes the whole thing really easy to understand and write, I think.

So I will now review Python decorators starting from their rationale, then looking at class-based decorators without arguments, class-based decorators with arguments, and finally moving to function-based decorators.

Rationale

What are decorators, and why should you learn how to define and use them?

Well, decorators are a way to change the behaviour of a function or a class, so they are actually a way of metaprogramming, but they make it a lot more accessible than metaclasses do. Decorators are, in my opinion, a very natural way of altering functions and classes.

Moreover, with the addition of some syntactic sugar, they are a very compact way to both make changes and signal that those changes have been made.

The best syntactic form of a decorator is the following

@dec
def func(*args, **kwds):
    pass

where dec is the name of the decorator and the function func is said to be decorated by it. As you can see any reader can quickly identify that the function has a special label attached, thus being altered in its behaviour.

This form, however, is just a simplification of the more generic form

def func(*args, **kwds):
    pass

func = dec(func)

But what actually ARE the changes that you may want to do to functions or classes? Let us stick for the moment to a very simple task: adding attributes. This is by no means a meaningless task, since there are many practical use cases that make use of it. Let us first test how we can add attributes to functions in plain Python

>>> def func():
...     pass
...
>>> func.attr = "a custom function attribute"
>>> func.attr
'a custom function attribute'

and to classes

>>> class SomeClass:
...     pass
...
>>> SomeClass.attr = "a custom class attribute"
>>> SomeClass.attr
'a custom class attribute'
>>> s = SomeClass()
>>> s.attr
'a custom class attribute'

As you can see adding attributes to a class correctly results in a class attribute, which is thus shared by any instance of that class (check this post for some explanations about class attributes and sharing).

Class-based decorators without arguments

As already explained, Python allows you to call any object (as you do with functions) as long as it provides the __call__() method. So to write a class-based decorator you just need to create an object that defines such a method.

When used as a decorator, a class is instantiated at decoration time, that is when the function is defined, and called when the function is called.

class CustomAttr:
    def __init__(self, obj):
        self.attr = "a custom function attribute"
        self.obj = obj

    def __call__(self):
        self.obj()

As you can see there are already a lot of things that need to be clarified. First of all the class, when used as a decorator, is initialized with the object that is going to be decorated, here called obj (most of the time it is just called f for function, but you know that this is only a special case).

While the __init__() method is called at decoration time, the __call__() method of the decorator is called instead of the same method of the decorated object. In this case (decorator without arguments), the __call__() method of the decorator does not receive any argument. In this example we just "redirect" the call to the original function, which was stored during the initialization step.

So you see that in this case we have two different moments in which we may alter the behaviour of the decorated objects. The first is at its definition and the second is when it is actually called.

The decorator can be applied with the simple syntax shown in a previous section

@CustomAttr
def func():
    pass

When Python parses the file and defines the function func the code it executes under the hood is

def func():
    pass

func = CustomAttr(func)

according to the definition of decorator. This is why the class shall accept the decorated object as a parameter in its __init__() method.

Note that in this case the func object we obtain after the decoration is no longer a function but a CustomAttr object

>>> func
<__main__.CustomAttr object at 0xb6f5ea8c>

and this is why in the __init__() method I attached the attr attribute to the class instance self and not to obj, so that now this works

>>> func.attr
'a custom function attribute'

This replacement is also the reason why you shall also redefine __call__(). When you write func() you are not executing the function but calling the instance of CustomAttr returned by the decoration.

Class-based decorators with arguments

This case is the most natural step beyond the previous one. Once you started metaprogramming, you want to do it with style, and the first thing to do is to add parametrization. Adding parameters to decorators has the only purpose of generalizing the metaprogramming code, just like when you write parametrized functions instead of hardcoding values.

There is a big caveat here. Class-based decorators with arguments behave in a slightly different way to their counterpart without arguments. Specifically, the __call__() method is run during the decoration and not during the call.

Let us first review the syntax

class CustomAttrArg:
    def __init__(self, value):
        self.value = value

    def __call__(self, obj):
        obj.attr = "a custom function attribute with value {}".format(self.value)
        return obj

@CustomAttrArg(1)
def func():
    pass

Now the __init__() method shall accept some arguments, with the standard rules of Python functions for named and default arguments. The __call__() method receives the decorated object, which in the previous case was passed to the __init__() method.

The biggest change, however, is that __call__() is not run when you call the decorated object, but immediately after __init__() during the decoration phase. This results in the following difference: while in the previous case the decorated object was no longer itself, but an instance of the decorator, now the decorated object becomes the return value of the __call__() method.

Remember that, when you call the decorated object, you are now actually calling what you get from __call__() so be sure of returning something meaningful.

In the above example I stored one single argument in __init__(), and this argument is passed to the decorator when applying it to the function. Then, in __call__(), I set the attribute of the decorated object, using the stored argument, and return the object itself. This is important, since I have to return a callable object.

This means that, if you have to do something complex with the decorated object, you may just define a local function that makes use of it and return this function. Let us see a very simple example

class CustomAttrArg:
    def __init__(self, value):
        self.value = value

    def __call__(self, obj):
        def wrap():
            # Here you can do complex stuff
            obj()
            # Here you can do complex stuff
        return wrap

@CustomAttrArg(1)
def func():
    pass

Here the returned object is no longer the decorated one, but a new function wrap() defined locally. It is interesting to show how Python identifies it

>>> @CustomAttrArg(1)
... def func():
...     pass
...
>>> func
<function CustomAttrArg.__call__.<locals>.wrap at 0xb70185cc>

This pattern enables you to do every sort of thing with the decorated object: not only change it (adding attributes, for example), but also pre- or post-filter its results. You may start to understand that what we called metaprogramming may be very useful for everyday tasks, and not only for some obscure wizardry.

Decorators and prototypes

If you write a class-based decorator with arguments, you are in charge of returning a callable object of choice. There is no assumption on the returned object, even if the usual case is that the returned object has the same prototype as the decorated one.

This means that if the decorated object accepts zero arguments (like in my example), you usually return a callable that accepts zero arguments. This is however by no means enforced by the language, and through this you may push the metaprogramming technique a bit. I'm not providing examples of this technique in this post, however.

Function-based decorators

Function-based decorators are very simple for simple cases and a bit trickier for complex ones. The problem is that their syntax can be difficult to grasp at first sight if you never saw a decorator. They are indeed not very different from the class-based decorators with arguments case, as they define a local function that wraps the decorated object and return it.

The case without arguments is always the simplest one

def decorate(f):
    def wrap():
        f()
    return wrap

@decorate
def func():
    pass

This behaves like the equivalent case with classes. The function is passed as an argument to the decorate() function by the decoration process that calls it passing the decorated object. When you actually call the function, however, you are actually calling wrap().

As happens for class-based decorators, the parametrization changes the calling procedure. This is the code

def decorate(arg1):
    def wrap(f):
        def _wrap(arg):
            f(arg + arg1)
        return _wrap
    return wrap

@decorate(1)
def func(arg):
    pass

As you see it is not really straightforward, and this is the reason I preferred to discuss it as the last case. Recall what we learned about class-based decorators: the first call to the decorate() function happens when the decorator is called with an argument. Thus @decorate(1) calls decorate() passing 1 as arg1, and this function returns the wrap() local function.

This second function accepts another function as an argument, and indeed it is used in the actual decoration process, which can be represented by the code func = wrap(func). This wrap() function, being used to decorate func(), wants to return a compatible object, that is in this case a function that accepts a single argument. This is why, in turn, wrap() defines and returns a _wrap() local function, which eventually uses both the argument passed to func() and the argument passed to the decorator.

So the process may be summarized as follows (I will improperly call func_dec the decorated function to show what is happening)

  • @decorator(1) returns a wrap() function (that knows the argument)
  • func is redefined as func_dec = wrap(func) becoming _wrap()
  • When you call func_dec(arg) Python executes _wrap(arg) which calls the original func() passing 1 + arg as argument

Obviously the power of the decorator concept is that you are not dealing with func_dec() but with func() itself, and all the "magic" happens under the hood.

If you feel uncomfortable with function-based decorators don't worry, as they are indeed a bit awkward. I usually stick to function based decorators for the simple cases (setting class attributes, for example), and move to class-based ones when the code is more complex.
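
One practical detail the examples above do not cover: since the decorated name ends up bound to the wrapper, the original function's __name__ and __doc__ get hidden. The standard library's functools.wraps helper copies them over; a minimal sketch of the simple case:

import functools

def decorate(f):
    @functools.wraps(f)   # copy __name__, __doc__, etc. from f onto wrap
    def wrap():
        return f()
    return wrap

@decorate
def func():
    """A docstring that survives decoration."""
    pass

print(func.__name__)   # 'func' instead of 'wrap'
print(func.__doc__)    # 'A docstring that survives decoration.'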

Example

A good example of the power of decorators which comes out of the box is functools.total_ordering. The functools module provides a lot of interesting tools to push the functional approach in Python, most notably partial() and partialmethod(), which are however out of the scope of this post.

The total_ordering decorator (documented here) wants to make an object provide a full set of comparison ordering methods starting from a small set of them. Comparison methods are those methods Python calls when two objects are compared. For example when you write a == b, Python executes a.__eq__(b) and the same happens for the other five operators > (__gt__), < (__lt__), >= (__ge__), <= (__le__) and != (__ne__).

Mathematically all those operators may be expressed using only one of them and the __eq__() method, for example __ne__ is !__eq__ and __lt__ is __le__ and !__eq__. This decorator makes use of this fact to provide the missing methods for the decorated object. A quick example

class Person:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name == other.name

    def __lt__(self, other):
        return self.name < other.name

This is a simple class that defines the == and < comparison methods.

>>> p1 = Person('Bob')
>>> p2 = Person('Alice')
>>> p1 == p2
False
>>> p1 < p2
False
>>> p1 >= p2
Traceback (most recent call last):
  File "/home/leo/prova.py", line 103, in <module>
    p1 >= p2
TypeError: unorderable types: Person() >= Person()

A big warning: Python doesn't complain if you try to perform the > and != comparisons but lacking the dedicated methods it does perform a "standard" comparison. This means that, as the documentation states here, "There are no implied relationships among the comparison operators. The truth of x==y does not imply that x!=y is false."

With the total_ordering decorator, however, all six comparisons become available

import functools

@functools.total_ordering
class Person:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name == other.name

    def __lt__(self, other):
        return self.name < other.name

>>> p1 = Person('Bob')
>>> p2 = Person('Alice')
>>> p1 == p2
False
>>> p1 != p2
True
>>> p1 > p2
True
>>> p1 < p2
False
>>> p1 >= p2
True
>>> p1 <= p2
False

Final words

Decorators are a very powerful tool, and they are worth learning. This may be for you just the first step into the amazing world of metaprogramming or just an advanced technique you may use to simplify your code. Whatever you do with them be sure to understand the difference between the two cases (with and without arguments) and don't avoid function-based decorators just because their syntax is a bit complex.

Resources

Many thanks to Bruce Eckel for his three posts which have been (I think) the source for the page on Python 3 Patterns, Recipes and Idioms (this latter is still work in progress).

A good source of advanced decorators can be found at the Python Decorator Library, and a lot of stuff may be found on Stackoverflow under the python-decorators tag.

Feedback

Feel free to use the blog Google+ page to comment on the post. The GitHub issues page is the best place to submit corrections.

Categories: FLOSS Project Planets

Intellimath blog: AXON 0.6

Planet Python - Thu, 2015-04-23 07:33

There is a new 0.6 release of pyaxon, a Python library for AXON.

Read more… (3 min remaining to read)

Categories: FLOSS Project Planets

[Howto] LDAP schema for Postfix

Planet KDE - Thu, 2015-04-23 07:27

The official Postfix documentation for using LDAP for user and alias lookups mentions certain LDAP attributes which are not part of the default OpenLDAP schema. In this article I will briefly explain a basic schema providing these attributes and the corresponding object class.

Postfix can easily be connected to LDAP to look up addresses and aliases. The Postfix LDAP documentation covers all the details. As mentioned there, the default configuration of Postfix expects two LDAP attributes in the LDAP schema: mailacceptinggeneralid and maildrop. This also shows in the code in src/global/dict_ldap.c:

dict_ldap->query = cfg_get_str(dict_ldap->parser, "query_filter", "(mailacceptinggeneralid=%s)", 0, 0);

However, these attributes are not part of the default OpenLDAP installation, and the Postfix documentation does not mention how exactly they have to look or where to get them. For that reason we at my employer credativ provide such a schema at GitHub: github.com/credativ/postfix-ldap-schema. The GitHub repository contains the schema, the corresponding licence and short documentation. A German introduction to the schema can also be found at credativ's blog: LDAP-Schema für Postfix-Abfragen

The provided schema defines the necessary attribute types mailacceptinggeneralid and maildrop as well as the object class postfixUser. Please note that the OIDs used in this schema are of the type Experimental OpenLDAP; see also the OID database.

To use the schema it must be loaded by OpenLDAP, for example by including it in slapd.conf. A corresponding LDAP entry could look like:

dn: uid=mmu,ou=accounts,dc=example,dc=net
objectclass: top
objectclass: person
objectclass: posixAccount
objectclass: postfixUser
cn: Max Mustermann
sn: Mustermann
uid: mmu
uidNumber: 5001
gidNumber: 5000
homeDirectory: /home/vmail
mailacceptinggeneralid: mmu
mailacceptinggeneralid: max.mustermann
mailacceptinggeneralid: m.mustermann
mailacceptinggeneralid: bugs
maildrop: mmu

As you can see, the example covers multiple aliases. Also, the final mailbox is a domain-less entity: maildrop: mmu does not mention any domain name. This only works if your mailboxes actually do not require (or even allow) domain names – in this case this was true since the mail is finally transported to a Dovecot server which does not know about the various domains.
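
Not part of the Postfix setup itself, but a quick way to check that such entries resolve the way the query_filter above expects is to run the same LDAP search by hand. A minimal sketch using the Python ldap3 library (hostname, bind DN and password are placeholders):

from ldap3 import Server, Connection, ALL

# Placeholder connection details, not from the article.
server = Server("ldap.example.net", get_info=ALL)
conn = Connection(server, "cn=admin,dc=example,dc=net", "secret", auto_bind=True)

# The same filter Postfix uses as its default query_filter.
conn.search("ou=accounts,dc=example,dc=net",
            "(mailacceptinggeneralid=max.mustermann)",
            attributes=["maildrop"])

for entry in conn.entries:
    print(entry.maildrop)   # expected: mmu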

Please note that this schema can only be the foundation for a more sophisticated, more complex schema which needs to be tailored to fit the individual needs of the corresponding setup.


Filed under: Business, E-Mail, HowTo, Linux, Technology, Thoughts
Categories: FLOSS Project Planets

Calligra @ conf.kde.in

Planet KDE - Thu, 2015-04-23 07:26
Reading maketh a full man; conference a ready man; and writing an exact man. - Francis Bacon
It feels great to learn that KDE conf India has grown bigger and bigger every year and so has the KDE India community. A lot of new open source enthusiasts can be seen around who are willing to contribute as much as they can. Our aim of hosting KDE conf in India is to provide a platform for the enthusiasts to begin their journey in open source and know more about KDE, its awesome applications, features and how one can contribute.
This year the conference was hosted in "God's Own Country", Kerala. Amritapuri University, known for its active involvement in various FOSS activities, hosted the conference. They have a beautiful campus situated right beside the beach, with an island a 5-minute walk away.

Beach & the island!

Noufal Ibrahim, founder of PyCon India, opened the conference with a keynote talk on the power of the command line. His interests were not totally aligned with those of KDE, but he still managed to motivate the students to choose Linux over proprietary operating systems. This talk was followed by Shantanu and Pradeepto's talk, where they explained the philosophy behind KDE, its applications and how beautifully it is designed for users and developers. This talk set the right mood for the audience to pick up KDE and gave them a background for the talks to follow. There were various talks where speakers explained the products they have been working on and motivated the enthusiasts to contribute. Day 1 ended with a Qt workshop which allowed the attendees to get a taste of Qt and gain confidence by working on real-world software.
Day 2 began with a talk on Marble where Sanjiban Bairagya took us through a virtual tour of the Earth and Moon and showed us the beauty of Marble. This talk was followed by Sinny Kumari showing us how to build applications with Qt, port them to Android, and release them on the Google Play Store.
This talk was followed by my talk on Calligra. The aim of delivering this talk was to make the audience aware of the Calligra Suite. I began the talk by asking the audience if they had used Calligra before, and surprisingly the majority of the audience were hearing the name for the first time. So giving a talk on Calligra was perfect, as at the end of the talk listeners would be curious to use the new software. The talk began with introducing them to Calligra and giving them a brief idea of the history behind building the application. Once the audience got a hang of what Calligra is, I went ahead and showcased the applications and asked the audience to guess the alternative applications they had used in place of Calligra (e.g. LibreOffice Writer in place of Calligra Words), and they judged it all correctly. So I was happy that Calligra applications have perfectly self-explanatory names. Then I went ahead and gave them a demo of Calligra Flow, showed them the beautiful artwork possible in Krita (by David Revoy) and introduced them to Calligra Author (whose concept seems very interesting to me). Then I introduced them to Sheets, Words, Kexi, Braindump, Stage, Plan & Karbon. I suggested they use Plan, Flow, Krita and Words for their everyday use and for university projects.
Talk on Calligra

Now the audience seemed pretty enthusiastic to know more, so I went deeper and explained to them the Calligra architecture and how sharing libraries helps us build and maintain a huge codebase with optimum effort. The perfect example here was that Calligra built 10 applications with 4.1 MLoC while LibreOffice built 6 applications with 7.16 MLoC. It clearly shows how good and powerful Qt is, allowing new developers to start working straight away. As the story goes, it's not just developers who can contribute to the project; I explained to them how they can also contribute by bug reporting, translating and documenting. In the end I showed them Calligra sprint group photos to show how developers from around the globe come together with a common vision to develop Calligra.
My talk was followed by a talk on little Qt games by Rishab Arora, where he explained game theory and demonstrated a game he built using Qt. Post lunch on Day 2, Devaja Shah spoke about KDE Galaxy, Ashish talked about MPRIS support for multimedia applications and Karan introduced Trojita.
All the talks got good response and there were a lot of enthusiastic students who wanted to contribute straight away by writing code & committing patches.
I would like to thank Boudewijn Rempt, Inge Wallin and Shantanu Tushar for giving me tips on making the presentation. Inge spent a lot of time explaining to me the concept and vision behind Calligra Author, which I had never used before myself. It is still developing but has an amazing concept.
I would also like to thank KDE & Red Hat for making this talk possible. 
 

It is always awesome for me to meet KDE folks from all over the country. Every year I see some new faces and some old faces and it is satisfying to realize that it has been years since my journey started with KDE and the energy level hasn't gone down. 
KDE India Group & Volunteers

Categories: FLOSS Project Planets