FLOSS Project Planets

qed42.com: New Module - Referral Discount for Drupal Commerce

Planet Drupal - Mon, 2016-10-17 04:25
New Module - Referral Discount for Drupal Commerce

One of the most popular growth hacking techniques for e-commerce and SaaS businesses is referrals: leveraging your users' networks to get new users by offering incentives / discounts. On a recent e-commerce project we had the requirement to create a complete referral system but couldn't find a module that came close to fulfilling the requirements, so we developed and contributed Commerce Referral Discount. This module allows us to provide a discount credit to an existing user for referring new users. Discounts can be configured for both the referring user and the new user who joins as part of the referral. Let's see a typical flow:

  • User A invites user B to sign up on the website using a unique invite URL (http://yoursite.com/invite/username).

  • User B visits the site using this URL, and is taken to the registration form to create a new account.

  • User B gets a discount amount (say $5) which they can use on their first purchase.


  • After user B makes their first purchase, user A gets a discount amount (say $10), which can be used on their next purchase.

  • Both discount amounts are configurable from the admin backend.

Module Configuration:

  • To configure the discount amounts browse to /admin/config/referral-discount


  • Configure referral discount on product purchase at Administration » Store settings » Promotions » Discounts:

    • Go to the admin/commerce/discounts page.

    • Click on the "Add discount" button.

    • Choose discount type: Order discount.

    • Choose offer type: Referral Discount.

    • Now click on Save discount.


  • Configure Invite/Referral Link block

    • Visibility settings: Show block for authenticated user.

The Commerce Referral Discount module provides an "Invite/Referral Link" block, which contains a unique refer/invite link for authenticated users to share with their friends.

The module also integrates with views:

  1. It provides a view type 'Commerce Referral Discount' which can be used to list all the discounts and other data that the module stores in the 'commerce_referral_discount' database table.
  2. It also provides a relationship to the user entity, so you can also include data about the referrer and the invited user.


    Nikhil Banait Mon, 10/17/2016 - 13:55
    Categories: FLOSS Project Planets

    Janez Urevc: Drupal 8.2.0 and composer wonderland

    Planet Drupal - Mon, 2016-10-17 04:22
    Drupal 8.2.0 and composer wonderland

    Over the weekend I took some time to update this site to the latest and greatest Drupal. The update itself was pretty straightforward and went without any problems (great work, Drupal community!).

    The more interesting part was something that I had wanted to do for a while. Until now I was using the old-school approach: a repo with all modules and other dependencies committed into it. Most of you have probably already heard about Composer. Thanks to Florian Weber (@webflo) and other contributors it is now fairly easy to manage your Drupal projects with it. There is a Composer template for Drupal projects available which will give you everything you need to get started. It took me just a good hour to fully convert my project (and I am no Composer expert). I found this approach very nice and convenient and will be using it for all my future projects.

    As part of this I also worked on a pull request for the Drupal docker project that makes the docroot location configurable, which is a requirement for Composer-driven projects.

    slashrsm Mon, 17.10.2016 - 10:22 Tags Drupal Docker Composer Enjoyed this post? There is more! Drupal dev environment on Docker janezurevc.name runs on Drupal 8! Call for Drupal 8 media ecosystem co-maintainers
    Categories: FLOSS Project Planets

    Vardot: DrupalCon Dublin 2016 - What Drupal means to us?

    Planet Drupal - Mon, 2016-10-17 04:09
    Events Read time: 1 minute

    DrupalCon Dublin is over – and now it's official. We all had fun, enjoyed sessions and sprints, visited all the booths, collected prizes, and had a lot of great talks with Drupalists from all over the globe. All the expectations we had before this event came true, and now it's time to draw some conclusions.

    But instead of writing a long recap, we at Vardot decided to remind you of the coolest moments of the conference in one short video. It's better to see once than hear a hundred times – so enjoy!



    Did you find yourself in the video? If not, how would you describe Drupal in one word? Your opinion counts. What were the most exciting moments of the conference for you? Share your impressions in the comments section below, and see you next year in Vienna!


    Tags:  DrupalCon Drupal Planet Title:  DrupalCon Dublin 2016 - What Drupal means to us?
    Categories: FLOSS Project Planets

    Drupalize.Me: Load Testing Our Site on Pantheon

    Planet Drupal - Mon, 2016-10-17 03:07

    I did some load testing to try and answer the question: How did moving our site from Linode to Pantheon affect the performance – measured in response time – of our site for both members and non-members?

    Categories: FLOSS Project Planets

    Aurelien Navarre: From $conf to $config and $settings in Drupal 8

    Planet Drupal - Mon, 2016-10-17 02:37

    With Drupal 7 we were used to leveraging $conf to set variable overrides. Unfortunately this was sub-optimal. The Configuration override system documentation page says it best:

    A big drawback of that system was that the overrides crept into actual configuration. When a configuration form that contained overridden values was saved, the conditional override got into the actual configuration storage.

    Using $conf on Drupal 7

    Let's say you wish to query the {variable} table for the site name.

    mysql> SELECT name,value FROM variable WHERE name = 'site_name';
    +-----------+----------------------+
    | name      | value                |
    +-----------+----------------------+
    | site_name | s:12:"My site name"; |
    +-----------+----------------------+
    1 row in set (0.00 sec)

    You can quickly get the information you want and confirm this is indeed the value your site returns at /admin/config/system/site-information. Now, if you want to override the entry in code, you can put the below variable override in settings.php.

    // Override the site name.
    $conf['site_name'] = 'My new site name';

    When bootstrapping, Drupal would return the overridden site name with the original value left untouched in the database. Easy. Doing so also means the value provided in code wouldn't be modifiable from the Drupal administration interface any longer. Well, you could modify it, but it'd be overridden no matter what.

    In Drupal 8 the variable subsystem is gone and we now need to leverage either $config or $settings depending upon what we wish to achieve. Both are being set via Settings::Initialize - Let's explore each to understand the differences.

    When to use $config in Drupal 8?

    $config is brought to you by the Configuration Management system. To keep it really simple, think about variable overrides and call them configuration overrides instead. And as you can see below, there are indeed similarities:

    • You can globally override specific configuration values for the site.
    • Any values you provide in these variable overrides will not be viewable from the Drupal administration interface. (Don't think that makes sense? There's an issue for that.)

    But there are also notable differences. One I think worth mentioning is related to modules. In Drupal 7, it was easy to disable a module via $conf, by setting the below variable override.

    $conf['page_load_progress'] = 0;

    This was useful when, for instance, you wanted certain modules disabled in non-prod environments.

    In Drupal 8, though, overriding the list of installed modules in core.extension is not supported: the module install or uninstall process never actually runs, and modules can no longer simply be disabled. There is a contrib module that tries to bring this functionality back to Drupal 8, but be careful of unintended consequences on a production site.

    There are other particular configuration values that are risky to override, like field storage configuration, which could lead to data loss. So, really, you have to make sure that what is going to be overridden won't have a negative impact on your site.

    In any case, the administration interface displays the values stored in configuration so that you can stage changes to other environments that don't have the overrides. If you do want to see the overrides, then there's the Drush --include-overridden argument to do just that.

    Let's say we wish to override the site name via settings.php.

    $config['system.site']['name'] = 'My new site name';

    To see the default and overridden values with Drush, just type the following.

    // Default value
    $ drush @site.env config-get system.site name
    'system.site:name': 'My site name'

    // Overridden value
    $ drush @drucker.local config-get system.site name --include-overridden
    'system.site:name': 'My new site name'

    Or you can use the Drupal API directly.

    // Default value
    >>> $site_name = \Drupal::config('system.site')->getOriginal('name', FALSE);
    => "My site name"

    // Overridden value
    >>> $site_name = \Drupal::config('system.site')->get('name');
    => "My new site name"

    When to use $settings in Drupal 8?

    Let's try to understand how it differs from $config. It's clearly explained in \Drupal\Core\Site\Settings:

    Settings should be used over configuration for read-only, possibly low bootstrap configuration that is environment specific.

    settings.php tries to clarify that even more.

    $settings contains environment-specific configuration, such as the files directory and reverse proxy address, and temporary configuration, such as security overrides.

    In other words, this refers to settings that wouldn't exist in the Configuration Management system. Here's an example: when defining the private file path, you have no choice but to define it via settings.php.

    $settings['file_private_path'] = '/mnt/private/files';

    Another example is API keys. You certainly don't want those or other sensitive information to be stored in the configuration and exported in YAML files and/or tracked under version control.

    Hopefully that clarifies the difference between $config and $settings. Let me know how you work with them and if I missed anything.

    Categories: FLOSS Project Planets

    Jaldhar Vyas: Something Else Will Be Posted Soon Also.

    Planet Debian - Mon, 2016-10-17 02:07

    Yikes today was Sharad Purnima which means there is about two weeks to go before Diwali and I haven't written anything here all year.

    OK new challenge: write 7 substantive blog posts before Diwali. Can I manage to do it? Let's see...

    Categories: FLOSS Project Planets

    Kracekumar Ramaraju: RC Week 0010

    Planet Python - Mon, 2016-10-17 02:04

    This week has been a mixed ride with the torrent client. I completed the two pending features, seeding and UDP tracker support. The torrent client has a major issue with downloading larger torrent files, like the Ubuntu ISO. The client starts the download from a set of peers and slowly halts at sock.recv after exchanging a handful of packets. At this juncture the CPU spikes to 100% when sock.recv blocks. Initially the code relied on asyncio-only features; now it uses the curio library. Next time you write async code in Python 3, I would suggest using curio. Curio's ability to track the state of all tasks is a magic wand for debugging. The live debugging facility helped me track down the blocking part of my code. Here is how curio's debug monitor looks:


    I am sure the bug is either a logical error in the code, or I am doing async completely wrong. I will travel with the bug for a day or two and see where I land. This is slowly emerging as the toughest bug I have faced.

    I am happy that people at RC are keen to help and assist in all different ways. In case you're reading this and have asyncio expertise to offer, I will be glad to hear from you. The easiest way to replicate the bug is to check out the code, switch to the curio branch, install the requirements in a Python 3.5 venv and run the command python cli.py download ~/Downloads/ubuntu-16.04.1-desktop-amd64.iso.torrent --loglevel=debug. After a minute or two you can see the program using 100% CPU in htop or top. Feel free to leave a comment or ping me on Twitter.

    Categories: FLOSS Project Planets

    hypothesis.works articles: Another invariant to test for encoders

    Planet Python - Mon, 2016-10-17 02:00

    The encode/decode invariant is one of the most important properties to know about for testing your code with Hypothesis or other property-based testing systems, because it captures a very common pattern and is very good at finding bugs.

    But how do you go beyond it? If encoders are that common, surely there must be other things to test with them?
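
    For readers who haven't met the pattern before, here is a minimal sketch of what the encode/decode invariant looks like as a Hypothesis test. The toy run-length codec and the function names below are illustrative assumptions of mine, not code from the article; the idea is simply that decoding whatever you encoded must give back the original value.

    from hypothesis import given
    from hypothesis import strategies as st


    def run_length_encode(text):
        """Encode a string as a list of [character, run length] pairs."""
        encoded = []
        for char in text:
            if encoded and encoded[-1][0] == char:
                encoded[-1][1] += 1
            else:
                encoded.append([char, 1])
        return encoded


    def run_length_decode(pairs):
        """Invert run_length_encode."""
        return "".join(char * count for char, count in pairs)


    @given(st.text())
    def test_decode_inverts_encode(s):
        # The invariant: decode(encode(x)) == x for every generated input.
        assert run_length_decode(run_length_encode(s)) == s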

    Categories: FLOSS Project Planets

    Kushal Das: Event report: PyCon India 2016

    Planet Python - Mon, 2016-10-17 01:16

    This time, instead of a per-day report, I will try to write about things that happened during PyCon India. This time we had the conference at JNU, in Delhi. It was nice to be back at JNU after such a long time. The other plus point was the chance to meet ilug-delhi again.

    Red Hat booth at PyCon

    We had booth duty during the conference. Thanks to Rupali, we managed to share the booth space with PyLadies. After the keynote the booth space got flooded with people. Many of them were students or freshers looking for internship options. We also had queries about services provided by Red Hat. Just outside the booth we had Ganesh, SurajN and Trishna; they were talking to every person visiting our booth, answering the hundreds of queries people had. It was nice to see how they were talking about working upstream and inspiring students to become upstream contributors. I also gave a talk on Python usage in the Red Hat family.

    PyLadies presence

    This was the first time we had a PyLadies presence at PyCon India. You can read their experiences in their blogs: 1, 2, 3. This presence was very important as it helped the community to learn about PyLadies. We saw interest in starting new chapters in different parts of the country. Nisha, Anwesha, Pooja, Rupali, Janki and the rest of the team managed to get an impromptu open space session, which I think was the best session on community I have ever seen. Jeff Rush, Van Lindberg, Paul Everitt and Dmitry Filippov joined to share their experience in community building.

    annual dgplug face to face meeting

    Those of us from dgplug.org all meet face to face during PyCon India; we generally call it the staircase meeting, as we used to sit on the staircase of the Bangalore venue. This time we chose to sit on the ramp at the venue. We had a list of people coming, but as you can see in the photo below, the list of dgplug members and friends is ever growing. Sirtaj also came in during the meeting and shared some valuable ideas with the students. I should mention VanL's keynote on day one here, as he spoke about "failure", which is something people don't like to talk about. It is very important for students to understand that failure is something to learn from, not to run away from. Most of the students we talked to later had understood the points Van made in his talk.

    Anwesha’s first talk at PyCon

    She already wrote about the talk on her blog, but I want to mention it again as it gave a new perspective to the developers present at the conference. The students at the conference who wanted to become upstream contributors got a chance to learn about the binding point: the license. She talked about best practices at the end of her talk. A few days back I read another blog post about her talk (and the PyLadies).

    One can view all the photos in my flickr album.

    Categories: FLOSS Project Planets

    Russell Coker: Improving Memory

    Planet Debian - Mon, 2016-10-17 00:20

    I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things many of which won’t be needed when they are younger as you never know which kids will need various skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

    Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

    For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

    Remembering addresses (street numbers etc.) doesn't seem very useful in any situation. Remembering the way to get to a place is useful, and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google Maps tends to give the more confusing routes (i.e. routes varying by the day and routes which take all shortcuts) works against this.

    I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

    When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

    What We Must Memorise

    Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

    One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

    We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

    What else do we need to memorise?

    Related posts:

    1. Xen Memory Use and Zope I am currently considering what to do regarding a Zope...
    2. Improving Computer Reliability In a comment on my post about Taxing Inferior Products...
    3. Chilled Memory Attacks In 1996 Peter Gutmann wrote a paper titled “Secure Deletion...
    Categories: FLOSS Project Planets

    Daniel Bader: Click & jump to any file or folder from the terminal

    Planet Python - Sun, 2016-10-16 20:00
    Click & jump to any file or folder from the terminal

    iTerm2 for macOS has a little known feature that lets you open files and folders simply by Cmd+Clicking on them in the terminal. Among other things, this is super handy for debugging tests.

    With this so-called Semantic History feature you can configure iTerm2 to open folders and files in their default application when you press Cmd and then click on them.

    So if you click on a folder name it will open in the Finder, and if you click on a .py file, for example, it will open in your editor.

    The amazingly cool part is that this also works with line numbers, so if you click on something like test_myapp.py:42 in the terminal your editor opens test_myapp.py and moves the cursor to line 42! 😀

    This is unbelievably handy if you're running your unit tests from the command line. I use it all the time to click and jump to failed test cases with the Pytest test runner, for example.

    Here’s how to set up Semantic History in iTerm2:

    • Open the iTerm2 preferences by clicking on iTerm2 → Preferences in the menu bar (or press Cmd+,)
    • Click on Profiles in the top row, then click Advanced all the way to the right. Find the section that says Semantic History.
    • Under Semantic History, set the first option to Open with editor… and then pick your favorite editor (I use Sublime Text 3).
    • Close the preferences window – that’s it!

    If you need some more help setting this up and a quick demo of what you can do with this feature, watch my video below:

    Like I said, I found this “click to jump to file” feature extremely helpful for working with tests.

    I usually run my Python tests with Pytest, and it prints test failure messages in a format that iTerm2 understands. So I can simply Cmd+click on a failed test assertion and that'll open up the test case in Sublime Text, placing the cursor at the exact line that caused the test to fail.

    This feature should be completely language agnostic by the way. You’ll be able to use it with any test runner or programming language – and any editor.

    P.S. Unfortunately iTerm2 is only available on macOS. I'd love to learn if there's a way to get the same functionality on Windows or Linux; so far I haven't been able to find anything. If you know how to do this on Linux or Windows please get in touch and tell me how to do it :) Thanks!

    Categories: FLOSS Project Planets

    Thomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

    Planet Debian - Sun, 2016-10-16 17:28

    OpenStack Newton is released, and uploaded to Sid

    OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the weekend, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn't disrupt Sid users too much, but 38 packages wouldn't build without it. Thanks to Santiago Vila for pointing at the issue here.

    As of writing, a lot of the Newton packages haven't migrated to Testing yet. The migration has been happening in a very messy way. I'd love to improve this process, but I'm not sure how, short of filing RC bugs against 250 packages (which would be painful to do) so they would migrate at once. Suggestions welcome.

    Bye bye Jenkins

    For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Then, nearly a year and a half ago, we started a project to build the packages within the OpenStack infrastructure and use CI/CD the way OpenStack upstream does. This is done, and Jenkins is gone, as of OpenStack Newton.

    Current status

    As of August, almost all of the packages' Git repositories were uploaded to OpenStack Gerrit, and the builds now happen in the OpenStack infrastructure. We've been able to build all of the OpenStack Newton Debian packages using this system. This non-official jessie backports repository has also been validated using Tempest.

    Goodies from Gerrit and upstream CI/CD

    It is very nice to have it built this way, so we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from 3rd party plugin vendors (networking driver vendors, for instance), or from upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome.

    The upstream infra: nodepool, zuul and friends

    The OpenStack infrastructure has already been described on planet.debian.org by Ian Wienand, so I won't describe it again; he did a better job than I ever would.

    How it works

    All source packages are stored in Gerrit with the "deb-" prefix. This is in order to avoid conflicts with upstream code, and to easily locate packaging repositories. For example, you'll find Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of each repository exists on every cloud where we have compute resources): one for the actual deb-* builds, under "jessie-newton", and one for the automatic backports, maintained in the deb-auto-backports gerrit repository.

    We're using a "git tag" based workflow. Every Gerrit repository contains all of the upstream branches, plus a "debian/newton" branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using "git archive", then used by sbuild to produce binaries. To package a new upstream release, one simply needs to "git merge -X theirs FOO" (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do "git commit -a --amend", and simply "git review". At this point, the OpenStack CI will build the package. If it builds correctly, then a core reviewer can approve the "merge commit", the patch is merged, then the package is built and the binary package published on the OpenStack Debian package repository.

    Maintaining backports automatically

    The automatic backports are maintained through a Gerrit repository called "deb-auto-backports", containing a "packages-list" file that simply lists the source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and dpkg --compare-versions magic, the packages-list is used to compare what's in the Debian archive and what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn't exist, then a build is triggered. There is the possibility to backport from any Debian release (using the -d flag in the "packages-list" file), and we can even use jessie-backports to just rebuild the package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we'll try to never use it, and rebuild packages when possible).

    The nice thing with this system, is that we don’t need to care much about maintaining packages up-to-date: the script does that for us.

    Upstream Debian repositories are NOT for production

    The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. It is the only reason why such Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience failures to download (hash or file size mismatches and such). Also, the functional tests aren't yet wired into the CI/CD in OpenStack infra, and therefore we cannot yet guarantee that the packages are usable.

    Improving the build infrastructure

    There’s a bunch of things which we could do to improve the build process. Let me give a list of things we want to do.

    • Get sbuild pre-set-up in the Jessie VM images, so we can save 3 minutes per build. This means writing a diskimage-builder element for sbuild.
    • Have the infrastructure use a state-of-the-art Debian ftp-sync mirror, instead of the current reprepro mirroring, which produces an unsigned repository that we can't use for sbuild-createchroot. This will improve things a lot, as currently there are lots of build failures because of httpredir.debian.org mirror inconsistencies (and these are very frustrating losses of time).
    • For each packaging change, there are 3 builds: the check job, the gate job, and the POST job. This is a waste of time and resources, as we need to build a package only once. It will hopefully be possible to fix this when the OpenStack infra team deploys Zuul 3.

    Generalizing to Debian

    During Debconf 16, I had very interesting talks with the DSA (Debian System Administrator) about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC is there now, within OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with “git review”.

    This system, however, currently only fits the "git tag" based packaging workflow. We'd have to do a little bit more work to make it possible to use pristine-tar (basically, allowing pushes to the upstream and pristine-tar branches without any CI job connected to the push).

    Dear DSA team, as we now have a nice PoC that is working well, on which the OpenStack packaging team is maintaining hundreds of packages, shall we try to generalize it and provide such infrastructure for every packaging team and DD?

    Categories: FLOSS Project Planets

    Lintel Technologies: How to create read only attributes and restrict setting attribute values on object in python

    Planet Python - Sun, 2016-10-16 16:23

    There are different ways to prevent setting attributes and make attributes read-only on an object in Python. We can use any one of the following ways to make attributes read-only:

    1. Property Descriptor
    2. Using descriptor methods __get__ and __set__
    3. Using slots (only restricts setting arbitrary attributes)
    Property Descriptor

    Python ships with a built-in function called property. We can use this function to customize the way attributes are accessed and assigned.

    First I will explain property itself, before giving you an idea of how it is useful for making an attribute read-only.

    The typical signature of the property function is:

    property([fget[, fset[, fdel[, doc]]]])

    As you can see, this function takes four arguments:

    fget is a function for getting an attribute value. fset is a function for setting an attribute value. fdel is a function for deleting an attribute value. And doc creates a docstring for the attribute.

    All these functions operate on a single attribute. That is, the fget function will be called when you access/get the attribute, and the fset function will be called when you try to set the attribute.

    Simple example

    class Foo(object):
        def __init__(self):
            self._x = None

        def getx(self):
            print "Getting attribute x"
            return self._x

        def setx(self, value):
            print "Setting attribute x"
            self._x = value

        def delx(self):
            print "Deleting attribute x"
            del self._x

        x = property(getx, setx, delx, "I'm the 'x' property.")

    Instantiate Foo and play with the instance attribute x:

    >>> i = Foo()
    >>> i.x
    Getting attribute x
    >>> i.x = 3
    Setting attribute x
    >>> i.x
    Getting attribute x
    3
    >>> i._x  # You can still access the hidden attribute _x, which is abstracted as x
    3
    >>> del i.x
    Deleting attribute x

    I hope you now have a clear picture of what the property function is and how we use it. In many cases we use property to hide actual attributes and abstract them with another name.

    You can also use property as a decorator, something like:

    class Foo(object):
        def __init__(self):
            self._x = None

        @property
        def x(self):
            """I'm the 'x' property."""
            return self._x

        @x.setter
        def x(self, value):
            self._x = value

        @x.deleter
        def x(self):
            del self._x

    Now let's come to the actual point: how we make an attribute read-only.

    It's simple: you just don't define a setter for the property attribute. Let's look at the following example:

    class Bank(object):
        def __init__(self):
            self._money = 100000

        @property
        def money(self):
            """Get the money available."""
            return self._money

    >>> b = Bank()
    >>> b.money
    100000
    >>> b.money = 9000000
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: can't set attribute
    >>> del b.money
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: can't delete attribute

    Here we didn't define a setter for the property attribute, so Python won't allow setting that specific attribute; you can't delete it either, since we didn't define fdel. Thus the attribute becomes read-only. You can still access b._money and set it; there is no restriction on setting this internal attribute.

    Descriptor methods __get__ and __set__

    These magic methods define a descriptor for the object attribute. To get a complete understanding of the descriptor magic methods and their usage, please check the other article.

    Like the fget and fset functions that the property function takes, __get__ is used to define the behavior when the descriptor's value is retrieved, and __set__ is used to define the behavior when the descriptor's value is set (assigned). __delete__ is used to define the behavior when the descriptor is deleted.

    To restrict setting an attribute and make it read-only, you have to use the descriptor's __set__ magic method and raise an exception in it.

    Let's see a simple example demonstrating descriptor objects and read-only attributes using descriptors:

    class Distance(object):
        """Descriptor for a distance, in meters."""
        def __init__(self, value=0.0):
            self.value = float(value)

        def __get__(self, instance, owner):
            return self.value

        def __set__(self, instance, value):
            self.value = float(value)


    class Time(object):
        """Descriptor for a time."""
        def __init__(self, value=1.0):
            self.value = float(value)

        def __get__(self, instance, owner):
            return self.value

        def __set__(self, instance, value):
            self.value = float(value)


    class Speed(object):
        """Descriptor for a speed."""
        def __get__(self, instance, owner):
            speed = instance.distance / instance.time
            return "%s m/s" % speed

        def __set__(self, instance, value):
            # Restrict setting the speed attribute
            raise AttributeError, "can not set attribute speed"


    class Vehicle(object):
        """Class to represent a vehicle, holding three descriptors, where speed is read-only."""
        distance = Distance()
        time = Time()
        speed = Speed()

    Let's see the result when trying to set the speed attribute:

    >>> from python_property import Vehicle
    >>> v = Vehicle()
    >>> v.distance = 100
    >>> v.time = 5
    >>> v.speed
    '20.0 m/s'
    >>> v.speed = 40
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "python_property.py", line 98, in __set__
        raise AttributeError, "can not set attribute speed"
    AttributeError: can not set attribute speed

    As you can see, we can't set the attribute speed on the instance v of Vehicle, because we restrict it in the __set__ descriptor method of the Speed class.

    Python __slots__

    The basic usage of __slots__ is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at any time, there is a static structure which does not allow additions after creation. This also gains us some performance due to the lack of dynamic attribute assignment; that is, it saves the overhead of one dict for every object that uses slots.

    If you are creating lots of instances (hundreds, thousands) of the same class, this can be a useful memory and performance optimization tool.

    Using __slots__ means you are defining static attributes on the class. This is how we save memory and gain performance, as there is no dynamic attribute assignment, and thus you can't set new attributes on the object.

    >>> class Foo(object):
    ...     __slots__ = 'a', 'b'
    ...
    >>> i = Foo()
    >>> i.a = 3
    >>> i.c = 3
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'Foo' object has no attribute 'c'

    You see, in the above example we are not able to set the attribute c, as it is not listed in __slots__. Anyway, __slots__ is about restricting assignment to new attributes; you can combine it with either of the above two methods to make existing attributes read-only, as in the sketch below.
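
    As a rough sketch of that combination (my own illustrative example, not code from the original article), the class below uses __slots__ to block new attributes entirely and a setter-less property to keep an existing attribute read-only:

    class Account(object):
        # __slots__ prevents adding arbitrary new attributes;
        # the setter-less property makes 'balance' read-only.
        __slots__ = ('_balance',)

        def __init__(self, balance):
            self._balance = balance

        @property
        def balance(self):
            """Read-only view of the internal _balance slot."""
            return self._balance

    With this in place, account.balance can be read, account.balance = 0 raises AttributeError because there is no setter, and account.extra = 1 raises AttributeError because of __slots__.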



    [1] __get__ and __set__ data descriptors don’t work on instance attributes http://stackoverflow.com/questions/23309698/why-is-the-descriptor-not-getting-called-when-defined-as-instance-attribute

    [2] http://stackoverflow.com/questions/472000/usage-of-slots

    The post How to create read only attributes and restrict setting attribute values on object in python appeared first on Lintel Technologies Blog.

    Categories: FLOSS Project Planets

    Dirk Eddelbuettel: Rcpp now used by 800 CRAN packages

    Planet Debian - Sun, 2016-10-16 15:42

    A moment ago, Rcpp hit another milestone: 800 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time.

    The easiest way to compute this is to use the reverse_dependencies_with_maintainers() function from a helper scripts file on CRAN. This still gets one or two false positives from packages declaring a dependency but not actually containing C++ code, and the like. There is also a helper function revdep() in the devtools package, but it includes Suggests:, which does not firmly imply usage and hence inflates the count. I have always opted for a tighter count with corrections.

    Rcpp cleared 300 packages in November 2014. It passed 400 packages in June of last year (when I only tweeted about it), 500 packages less than a year ago in late October, 600 packages this March and 700 packages this July. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

    Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent July of last year, seven percent just before Christmas and eight percent this summer.

    800 user packages is a staggeringly large and humbling number. This puts more than some responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

    At the rate we are going, the big 1000 may be hit before we all meet again for useR! 2017.

    And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    Categories: FLOSS Project Planets

    Weekly Python Chat: Class-Based Views in Django

    Planet Python - Sun, 2016-10-16 13:00

    Most Django programmers use function-based views, but some use class-based views. Why? We're going to talk about how class-based views are different.
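
    As a quick taste of the difference (a minimal sketch of my own, not taken from the chat announcement), here is the same trivial view written both ways in Django:

    from django.http import HttpResponse
    from django.views.generic import View


    # Function-based view: a plain function that handles the request itself.
    def hello_fbv(request):
        return HttpResponse('Hello from a function-based view')


    # Class-based view: each HTTP method maps to a method on the class, and
    # behaviour can be shared or overridden through inheritance and mixins.
    class HelloView(View):
        greeting = 'Hello from a class-based view'

        def get(self, request):
            return HttpResponse(self.greeting)

    In urls.py the class-based version is hooked up with HelloView.as_view(), which is where much of the extra flexibility (and indirection) comes from.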

    Categories: FLOSS Project Planets

    Lintel Technologies: Python: __new__ magic method explained

    Planet Python - Sun, 2016-10-16 12:16

    Python is an object-oriented language; everything is an object in Python. Python has a special type of methods called magic methods, named with leading and trailing double underscores.

    When we talk about the magic method __new__, we also need to talk about __init__.

    These methods are called when you instantiate a class (the process of creating an instance from a class is called instantiation), that is, when you create an instance. The magic method __new__ is called while the instance is being created; using this method you can customize instance creation. It is the method that gets called first, and then __init__ is called to initialize the instance.

    The __new__ method takes the class reference as the first argument, followed by the arguments passed to the constructor (the arguments passed in the call of the class to create the instance). __new__ is responsible for creating the instance, so you can use this method to customize object creation. Typically __new__ returns a reference to the created instance object. __init__ is called once the __new__ method has completed execution.

    You can create the new instance of the class by invoking the superclass's __new__ method using super, something like super(CurrentClass, cls).__new__(cls, [, ...]).

    Usual class declaration and instantiation

    class Foo(object):
        def __init__(self, a, b):
            self.a = a
            self.b = b

        def bar(self):
            pass

    i = Foo(2, 3)

    A class implementation with __new__ method overridden

    class Foo(object):
        def __new__(cls, *args, **kwargs):
            print "Creating Instance"
            instance = super(Foo, cls).__new__(cls, *args, **kwargs)
            return instance

        def __init__(self, a, b):
            self.a = a
            self.b = b

        def bar(self):
            pass


    >>> i = Foo(2, 3)
    Creating Instance


    You can create the instance inside the __new__ method either by using the super function or by directly calling the __new__ method on object, when the parent class is object. That is,

    instance = super(MyClass, cls).__new__(cls, *args, **kwargs)


    instance = object.__new__(cls, *args, **kwargs)

    Things to remember

    If __new__ returns an instance of its own class, then the __init__ method of the newly created instance will be invoked, with the instance as the first argument (as in __init__(self, [, ...])), followed by the arguments passed to __new__ or to the call of the class. So __init__ will be called implicitly.

    If the __new__ method returns something other than an instance of the class, then the instance's __init__ method will not be invoked. In that case you have to call __init__ yourself.


    It is uncommon to override the __new__ method, but sometimes it is required, for example if you are writing APIs, customizing class or instance creation, or abstracting something using classes.

    Singleton using __new__

    You can implement the singleton design pattern using the __new__ method. A singleton class is a class that can only have one object, that is, a single instance of the class.

    Here is how you can restrict creating more than one instance by overriding __new__

    class Singleton(object):
        _instance = None  # Keep instance reference

        def __new__(cls, *args, **kwargs):
            if not cls._instance:
                cls._instance = object.__new__(cls, *args, **kwargs)
            return cls._instance

    It is not limited to singletons; you can also impose a limit on the total number of created instances:

    class LimitedInstances(object):
        _instances = []  # Keep track of instance references
        limit = 5

        def __new__(cls, *args, **kwargs):
            if len(cls._instances) >= cls.limit:
                raise RuntimeError, "Could not create instance. Limit %s reached" % cls.limit
            instance = object.__new__(cls, *args, **kwargs)
            cls._instances.append(instance)
            return instance

        def __del__(self):
            # Remove instance from _instances
            self._instances.remove(self)


    Customize Instance Object

    You can customize the created instance and perform some operations on it before the initializer __init__ is called. You can also impose restrictions on instance creation based on some constraints:

    def createInstance():
        # Do whatever you want to determine whether the instance can be created
        return True


    class CustomizeInstance(object):
        def __new__(cls, a, b):
            if not createInstance():
                raise RuntimeError, "Could not create instance"
            instance = super(CustomizeInstance, cls).__new__(cls, a, b)
            instance.a = a
            return instance

        def __init__(self, a, b):
            pass


    Customize Returned Object

    Usually when you instantiate a class it returns an instance of that class. You can customize this behaviour and return some other object you want.

    The following is a simple example demonstrating how to return an arbitrary object instead of the class instance:

    class AbstractClass(object):
        def __new__(cls, a, b):
            instance = super(AbstractClass, cls).__new__(cls)
            instance.__init__(a, b)
            return 3

        def __init__(self, a, b):
            print "Initializing Instance", a, b


    >>> a = AbstractClass(2, 3)
    Initializing Instance 2 3
    >>> a
    3

    Here you can see that when we instantiate the class it returns 3 instead of an instance reference, because we return 3 instead of the created instance from the __new__ method. We call __init__ explicitly; as I mentioned above, we have to call __init__ explicitly if we are not returning the instance object from __new__.

    The __new__ method is also used in conjunction with metaclasses to customize class creation.
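
    As a rough sketch of that idea (my own illustrative example, not from the original post), a metaclass can override __new__ to inspect or modify a class while it is being created:

    class VerboseMeta(type):
        def __new__(mcs, name, bases, attrs):
            # Called while the class itself (not an instance) is being created.
            print "Creating class", name
            attrs['created_by_meta'] = True
            return super(VerboseMeta, mcs).__new__(mcs, name, bases, attrs)


    class Foo(object):
        __metaclass__ = VerboseMeta  # Python 2 syntax, matching the examples above

    Defining Foo prints "Creating class Foo", and every class built with this metaclass gets a created_by_meta attribute.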


    There are many possibilities for how you can use this feature. It is not always required to override the __new__ method unless you are doing something special regarding instance creation.

    Simplicity is better than complexity. Try to make life easier: use this method only when it is necessary.


    The post Python: __new__ magic method explained appeared first on Lintel Technologies Blog.

    Categories: FLOSS Project Planets

    Call for attendees Lakademy 2017

    Planet KDE - Sun, 2016-10-16 11:32

    Lakademy 2016 Group Photo.

    As many of you know, since 2012 we have organized Lakademy, a sort of Latin American Akademy. The event brings together KDE Latin American contributors for hacking sessions to work on their projects, promo meetings to plan KDE dissemination strategies in the region, and other activities.

    Every year we make a call for attendees. Anyone can participate; although the event is focused on Latin American contributors, we also want to encourage new people to become contributors and to join the community. So if you live in any country in Latin America and would like to join us at the next event, please complete this form showing your interest. The form will be available until the beginning of November.

    The next Lakademy will take place in the Brazilian city of Belo Horizonte, between April 28 and May 1. Remember that if you need help with the costs of travel and lodging, KDE e.V. can help you with this, but this will depend on several factors such as the amount requested, the number of participants in the event, how active you are in the community, and so on. Do not be shy; we encourage you to apply and join our Latin American community. Maybe you will be the next to host Lakademy in your country. We would love to hold an edition of the event in a Latin American country other than Brazil.

    See you at Lakademy 2017!

    Categories: FLOSS Project Planets

    Steinar H. Gunderson: backup.sh opensourced

    Planet Debian - Sun, 2016-10-16 09:43

    It's been said that backup is a bit like flossing; everybody knows you should do it, but nobody does it.

    If you want to start flossing, an immediate question is what kind of dental floss to get—and conversely, for backup, which backup software do you want to rely on? I had some criteria:

    • Automated full-system backup, not just user files.
    • Self-controlled, not cloud (the cloud economics don't really make sense for 10 TB+ of backup storage, especially when you factor in restore cost).
    • Does not require one file on the backup server for each file on the backed-up server (makes for infinitely long fscks, greatly increased risk of file system corruption, frequently gives performance problems in the backup host, and makes inter-file compression impossible).
    • Not written in Python (makes for glacial speeds).
    • Pull backups, not push (so a backed-up server cannot delete its own backups in event of a break-in).
    • Does not require any special preparation or lots of installation on each server.
    • Ideally, restore using UNIX standard tools only.

    I looked at basically everything that existed in Debian and then some, and all of them failed. But Samfundet had its own script that's basically just a simple wrapper around tar and ssh, which has worked for 15+ years without a hitch (including several restores), so why not use it?

    All the authors agreed to a GPLv2+ licensing, so now it's time for backup.sh to meet the world. It does about the simplest thing you can imagine: ssh to the server and use GNU tar to tar down every filesystem that has the “dump” bit set in fstab. Every 30 days, it does a full backup; otherwise, it does an incremental backup using GNU tar's incremental mode (which makes sure you will also get information about file deletes). It doesn't do inter-file diffs (so if you have huge files that change only a little bit every day, you'll get blowup), and you can't do single-file restores without basically scanning through all the files; tar isn't random-access. So it doesn't do much fancy, but it works, and it sends you a nice little email every day so you can know your backup went well. (There's also a less frequently used mode where the backed-up server encrypts the backup using GnuPG, so you don't even need to trust the backup server.) It really takes fifteen minutes to set up, so now there's no excuse. :-)

    Oh, and the only good dental floss is this one. :-)

    Categories: FLOSS Project Planets

    Rémi Vanicat: Trying to install Debian on G752VM-GC006T

    Planet Debian - Sun, 2016-10-16 08:13

    I'm trying to install Debian GNU/Linux on my new ASUS G752VM-GC006T.

    So what I've discovered:

    • It's F2 to enter the BIOS, and in the last BIOS section, you can boot directly from any device.
    • It boots from the netinst DVD.
    • netinst can't see the SSD disk.
    • The trackpad doesn't work.
    • After a successful install, booting into the fresh install failed. I had to use the recovery tools to install the nvidia non-free package to get Debian to boot successfully.
    • I mostly use sid on my computers (mostly to test problems and report them). It was a bad idea: Debian stopped finding its own disk. Adding pci=nomsi to the kernel options fixed this.

    So I have a working Linux. My problems are:

    • I still can't see the SSD disk from Linux.
    • I cannot easily dual-boot:
      • Linux can't see the SSD where Windows is,
      • the Windows boot loader doesn't want to start Debian, just because it doesn't want to,
      • at least the BIOS can boot both of them, but there is no "pretty" menu.
    • The trackpad is not working.
    • 0.5 TB feels small today...

    And the question is: where should I report those bugs?

    First edit: rEFInd seems to find Windows and Debian, thanks to blackcat77.

    Categories: FLOSS Project Planets

    A Dev From The Plains: Upgrading Drupal’s Viewport module to Drupal 8 (Part 3 – Final)

    Planet Drupal - Sun, 2016-10-16 05:44

    Image taken from https://drupalize.me

    With parts one and two of the series covered, this 3rd (and final!) part will cover some other important aspects of Drupal development that I wanted to pay attention to while working on the Drupal 8 version of the viewport module.

    Essentially, the kind of aspects that are easy to forget (or ignore on purpose) when a developer comes up with a working version or something “good enough” to be released, and thinks there’s no need to write tests for the existing codebase (we’ve all done it at some point).

    This post will be a mix of links to documentation and resources that were either useful or new to me, and tips about some of the utilities and classes available when writing unit or functional tests.


    Since I wrote both functional tests and unit tests for the viewport module, this section is split in two parts, for the sake of clarity and structure.

    - Functional tests

    Before getting into the bits and bobs of functional test classes, I decided to read a couple of articles on the matter, just to see how much testing had changed in the last year(s) in Drupal 8. This article on the existing types of Drupal 8 tests was a good overview, as well as Acquia's lesson on unit and functional tests.

    Other than that, there were also a few change notices I went through in order to understand what was being done with the testing framework and why. TL;DR: Drupal is moving away from Simpletest to PHPUnit to modernise the test infrastructure. That started by adding the PHPUnit test framework to Drupal core. Also, new classes were added that leveraged PHPUnit instead of the existing Simpletest-based classes. Specifically, a new KernelTestBase was added, and also a BrowserTestBase class, replacing the well-known WebTestBase.

    I decided to base all my tests on the PHPUnit classes exclusively, knowing that Simpletest will just die at some point. One of the key requirements for this was to put all test classes in a different path: {module-folder}/tests/src/{test-type}/{test-classes}. Also, the @group annotation was still required, as with Simpletest, so that tests of specific groups or modules can be executed alone, without running the whole test suite.

    The first thing I noticed when I started to write the first Functional test class, ViewportPermissionsTest, was that the getInfo() method was no longer needed, since it’s been removed in favor of PHPDoc, which means all the info there is retrieved from the documentation block of the test class.

    With PHPUnit introduced in Drupal 8, a lot of new features have been added to test classes through PHP annotations. Without getting into much detail, two that caught my attention when trying to debug a test were the @runTestsInSeparateProcesses and @preserveGlobalState annotations. Most of the time the defaults will work for you, although there might be cases where you may want to change them. Note that running tests in separate processes is enabled by default, but some performance issues have been reported on this feature for PHPUnit.

    I also came across some issues about the lack of a configuration schema defined for my module, when trying to execute tests in the ViewportTagPrintedOnSelectedPagesTest class, which led me to find another change notice (yeah… too many!) about tests enforcing strict configuration schema adherence by default. The change notice explains how to avoid that (if really necessary), but given that’s not a good practice, you should just add a configuration schema to all your modules, if applicable.

    Some other scattered notes of things I noticed when writing functional tests:

    • drupalPostForm() $edit keys need to be the HTML name of the form field being tested, not the field name as defined in the Form API $form array. Same happens with drupalPost().
    • When working on a test class, $this->assertSession()->responseContains() should be used to check HTML contents returned on a page. The change notice about BrowserTestBase class, points to $this->assertSession()->pageTextContains(), but that one is useful only for actual contents displayed on a page, and not the complete HTML returned to the browser.
    - Unit Tests

    The main difference between unit tests and functional tests (speaking only of structure in the codebase), is that unit tests need to be placed under {module-name}/tests/src/Unit, and they need to extend the UnitTestCase class.

    Running PHPUnit tests just requires executing a simple command from the core directory of a Drupal project, specifying the testsuite or the group of tests to be executed, as shown below:

    php ../vendor/bin/phpunit --group=viewport

    php ../vendor/bin/phpunit --group=viewport --testsuite=unit

    As mentioned in the link above, there has to be a properly configured phpunit.xml file within the core folder. Also, note that Drupal.org’s testbot runs tests in a different way. As detailed in the link above, tests can be executed locally in the same way the bot does, to ensure that the setup will allow a given module to receive automated testing support once contributed to Drupal.org.

    PHPUnit matchers: In PHPUnit parlance, a matcher is ultimately an instance of an object that implements the PHPUnit_Framework_MockObject_Matcher_Invocation interface. In summary, it helps to define the result that would be expected of a method call in an object mock, depending on the arguments passed to the method and the number of times it’s been called. For example:

    $this->pathMatcher->expects($this->any())
      ->method('getFrontPagePath')
      ->will($this->returnValue('/frontpage-path'));

    That's telling the pathMatcher mock (not to be confused with a PHPUnit matcher) to return the value "/frontpage-path" whenever the method getFrontPagePath() is called. However, this other snippet:

    $this->pathMatcher->expects($this->exactly(3))
      ->method('getFrontPagePath')
      ->will($this->returnValue('/third-time'));

    tells the mock to expect the getFrontPagePath() method to be called exactly 3 times, returning the value "/third-time" on each call.

    will(): As seen in the examples above, the will() method can be used to tell the object mock to return different values on consecutive calls, or to get the return value processed by a callback, or fetch it from a values map.

    These concepts are explained in more detail in the Chapter 9 of the PHPUnit manual.

    Coding Standards and Code Sniffer

    With the module fully working, and tests written for it, the final stage was to run some checks on the coding standards, and ensure everything is according to Drupal style guides. Code Sniffer makes this incredibly easy, and there’s an entire section dedicated to it in the Site Building Guide at Drupal.org, which details how to install it and run it from the command line.

    Finally, and even though I didn’t bother changing the README.txt contents, I also noticed the existence of a README Template file available in the online documentation, handy when contributing new modules. With all the codebase tidied up and proper tests in place, the module was finally good to go.

    That’s it! This ends the series of dev diaries about porting the Viewport module to D8. I hope you enjoyed it! There might be a similar blog post about the port process of the User Homepage module, so stay tuned if you’re interested in it.

    Links and (re)sources

    As usual, I’ll leave a list of the articles and other resources I read while working on the module upgrade.

    • Which D8 test is right for me?: link.
    • Acquia’s lesson on Unit and Functional tests: link.
    • Simpletest Class, File and Namespace structure (D8): link (note Simpletest wasn’t used in the end in the module port).
    • Converting D7 SimpleTests to D8: link.
    • Configuration Schema and Metadata: link.
    • Drupal SimpleTest coding standards: link
    • PHPUnit in Drupal 8: link.
    • Agile Unit Testing: link.
    • Test Doubles in PHPUnit: link.
    • Drupal coding standards: link.
    • README Template file: link.
    Change Records and topic discussions
    Categories: FLOSS Project Planets