Feeds

Four Habit-Forming Tips to Faster C++

Planet KDE - Thu, 2016-07-21 10:46

Are you a victim of premature pessimisation? Here’s a short definition from Herb Sutter:

Premature pessimization is when you write code that is slower than it needs to be, usually by asking for unnecessary extra work, when equivalently complex code would be faster and should just naturally flow out of your fingers.

Despite how amazing today’s compilers have become at generating code, humans still know more about the intended use of a function or class than can be specified by mere syntax. Compilers operate under a host of very strict rules that enforce correctness at the expense of faster code. What’s more, modern processor architectures sometimes compete with C++ language habits that have become ingrained in programmers from decades of previous best practice.

I believe that if you want to improve the speed of your code, you need to adopt habits that take advantage of modern compilers and modern processor architectures—habits that will help your compiler generate the best-possible code. Habits that, if you follow them, will generate faster code before you even start the optimisation process.

Here are four habit-forming tips that are all about avoiding pessimisation and that, in my experience, go a long way to creating faster C++ classes.

1) Make use of the (named-) return-value optimisation

According to Lawrence Crowl, (named-) return-value optimisation ((N)RVO) is one of the most important optimisations in modern C++. Okay—what is it?

Let’s start with plain return-value optimization (RVO). Normally, when a C++ method returns an unnamed object, the compiler creates a temporary object, which is then copy-constructed into the target object.

MyData myFunction()
{
    return MyData(); // Create and return unnamed obj
}

MyData abc = myFunction();

With RVO, the C++ standard allows the compiler to skip the creation of the temporary, treating both object instances—the one inside the function and the one assigned to the variable outside the function—as the same. This usually goes under the name of copy elision. But what is elided here is the temporary and the copy.

So, not only do you save the copy constructor call, you also save the destructor call, as well as some stack memory. Clearly, elimination of extra calls and temporaries saves time and space, but crucially, RVO is an enabler for pass-by-value designs. Imagine MyData was a large million-by-million matrix. The mere chance that some target compiler could fail to implement this optimisation would make every good programmer shy away from return-by-value and resort to out parameters instead (more on those further down).

As an aside: don’t C++ Move Semantics solve this? The answer is: no. If you move instead of copy, you still have the temporary and its destructor call in the executable code. And if your matrix is not heap-allocated, but statically sized, such as a std::array<std::array<double, 1000>, 1000>, moving is the same as copying. With RVO, you mustn’t be afraid of returning by value. You must unlearn what you have learned and embrace return-by-value.

Named Return Value Optimization is similar but it allows the compiler to eliminate not just rvalues (temporaries), but lvalues (local variables), too, under certain conditions.

What all compilers these days (and for some time now) reliably implement is NRVO in the case where there is a single variable that is passed to every return, and declared at function scope as the first variable:

MyData myFunction()
{
    MyData result;          // Declare return val in ONE place
    if (doing_something) {
        return result;      // Return same val everywhere
    }
    // Doing something else
    return result;          // Return same val everywhere
}

MyData abc = myFunction();

Sadly, many compilers, including GCC, fail to apply NRVO when you deviate even slightly from the basic pattern:

MyData myFunction()
{
    if (doing_something)
        return MyData();    // RVO expected
    MyData result;
    // ...
    return result;          // NRVO expected
}

MyData abc = myFunction();

At least GCC fails to use NRVO for the second return statement in that function. The fix in this case is easy (go back to the first version), but it’s not always that easy. It is an altogether sad state of affairs for a language that is said to have the most advanced optimisers available to it for compilers to fail to implement this very basic optimisation.

So, for the time being, get your fingers accustomed to typing the classical NRVO pattern: it enables the compiler to generate code that does what you want in the most efficient way enabled by the C++ standard.

If diving into assembly code to check whether a particular pattern makes your compiler drop NRVO isn’t your thing, Thomas Brown provides a very comprehensive list of compilers tested for their NRVO support, and I’ve extended Brown’s work with some additional results.

If you start using the NRVO pattern but aren’t getting the results you expect, your compiler may not automatically perform NRVO transformations. You may need to check your compiler optimization settings and explicitly enable them.

2) Return parameters by value whenever possible

This is pretty simple: don’t use “out-parameters”. The result for the caller is certainly kinder: we just return our value instead of having the caller allocate a variable and pass in a reference. Even if your function returns multiple results, nearly all of the time you’re much better off creating a small result struct that the function passes back (via (N)RVO!):

That is, instead of this:

void convertToFraction(double val, int &numerator, int &denominator)
{
    numerator = /* calculation */ ;
    denominator = /* calculation */ ;
}

int numerator, denominator;
convertToFraction(val, numerator, denominator); // or was it "denominator, numerator"?
use(numerator);
use(denominator);

You should prefer this:

struct fractional_parts
{
    int numerator;
    int denominator;
};

fractional_parts convertToFraction(double val)
{
    int numerator = /* calculation */ ;
    int denominator = /* calculation */ ;
    return {numerator, denominator}; // C++11 braced initialisation -> RVO
}

auto parts = convertToFraction(val);
use(parts.numerator);
use(parts.denominator);

This may seem surprising, even counter-intuitive, for programmers that cut their teeth on older x86 architectures. You’re just passing around a pointer instead of a big chunk of data, right? Quite simply, “out” parameter pointers force a modern compiler to avoid certain optimisations when calling non-inlined functions. Because the compiler can’t always determine if the function call may change an underlying value (due to aliasing), it can’t beneficially keep the value in a CPU register or reorder instructions around it. Besides, compilers have gotten pretty smart—they don’t actually do expensive value passing unless they need to (see the next tip). With 64-bit and even 32-bit CPUs, small structs can be packed into registers or automatically allocated on the stack as needed by the compiler. Returning results by value allows the compiler to understand that there isn’t any modification or aliasing happening to your parameters, and you and your callers get to write simpler code.

3) Cache member-variables and reference-parameters

This rule is straightforward: take a copy of the member-variables or reference-parameters you are going to use within your function at the top of the function, instead of using them directly throughout the method. There are two good reasons for this.

The first is the same as the tip above—because pointer references (even member-variables in methods, as they’re accessed through the implicit this pointer) put a stick in the wheels of the compiler’s optimisation. The compiler can’t guarantee that things don’t change outside its view, so it takes a very conservative (and in most cases wasteful) approach and throws away any state information it may have gleaned about those variables each time they’re used anew. And that’s valuable information that can help the compiler eliminate instructions and references to memory.

Another important reason is correctness. Consider an example provided by Lawrence Crowl in his CppCon 2014 talk “The Implementation of Value Types”: instead of this complex number multiplication:

template <class T>
complex<T> &complex<T>::operator*=(const complex<T> &a)
{
    real = real * a.real - imag * a.imag;
    imag = real * a.imag + imag * a.real;
    return *this;
}

You should prefer this version:

template <class T>
complex<T> &complex<T>::operator*=(const complex<T> &a)
{
    T a_real = a.real, a_imag = a.imag;
    T t_real = real, t_imag = imag; // t == this
    real = t_real * a_real - t_imag * a_imag;
    imag = t_real * a_imag + t_imag * a_real;
    return *this;
}

This second, non-aliased version will still work properly if you use value *= value to square a number; the first one won’t give you the right value because it doesn’t protect against aliased variables.

To summarise succinctly: read from (and write to!) each non-local variable exactly once in every function.

4) Organize your member variables intelligently

Is it better to organize member variables for readability or for the compiler? Ideally, you pick a scheme that works for both.

And now is a perfect time for a short refresher about CPU caches. Of course data coming from memory is very slow compared to data coming from a cache. An important fact to remember is that data is loaded into the cache in (typically) 64-byte blocks called cache lines. The cache line—that is, your requested data and the 64 bytes surrounding it—is loaded on your first request for memory absent in the cache. Because every cache miss silently penalises your program, you want a well-considered strategy for ensuring you reduce cache misses whenever possible. Even if the first memory access is outside the cache, trying to structure your accesses so that a second, third, or fourth access is within the cache will have a significant impact on speed. With that in mind, consider these tips for your member-variable declarations:

  • Move the most-frequently-used member-variables first
  • Move the least-frequently-used member-variables last
  • If variables are often used together, group them near each other
  • Try to reference variables in your functions in the order they’re declared

Nearly all C++ compilers organize member variables in memory in the order in which they are declared. And grouping your member variables using the above guidelines can help reduce cache misses that drastically impact performance. Although compilers can be smart about creating code that works with caching strategies in a way that’s hard for humans to track, the C++ rules on class layout make it hard for compilers to really shine. Your goal here is to help the compiler by stacking the deck on cache-line loads that will preferentially load the variables in the order you’ll need them.

This can be a tough one if you’re not sure how frequently things are used. While it’s not always easy for complicated classes to know what member variables may be touched more often, generally following this rule of thumb as well as you can will help. Certainly for the simpler classes (string, dates/times, points, complex, quaternions, etc) you’ll probably be accessing most member variables most of the time, but you can still declare and access your member variables in a consistent way that will help guarantee that you’re minimizing your cache misses.

Conclusion

The bottom line is that it still takes some amount of hand-holding to get a compiler to generate the best code. Good coding habits are by no means the be-all and end-all, but they are certainly a great place to start.

The post Four Habit-Forming Tips to Faster C++ appeared first on KDAB.

Categories: FLOSS Project Planets

PyCharm: Announcing General Availability of PyCharm 2016.2

Planet Python - Thu, 2016-07-21 10:06

Today we bring you PyCharm 2016.2, now available for download. This is the second update in the series of releases planned for 2016. Its outstanding new features for professional Python, Web and scientific development work together smoothly to offer you a unique coding experience.

As usual, PyCharm 2016.2 is available as a full-featured Professional Edition for Python and Web development, or as a free and open-source Community Edition for pure Python and scientific development.

Here are some notable highlights of this release.

Python-related improvements:

  • vmprof Profiler Support
  • Pandas dataframes viewer
  • Thread suspend option
  • Function return values in the debugger
  • Improvements for package installation from requirements.txt
  • Configuration for optimize imports
  • Enhanced postfix code completion
  • Lettuce scenario outlines

Platform enhancements:

  • Support for ligatures
  • Improved inspection tool
  • Custom background image for the editor
  • Regex support improvement
  • Handling of unversioned files
  • Improvements in working with patches
  • Enhanced VCS Log Viewer
  • Database tool improvements
  • And even more

For more details please watch this short What’s New in PyCharm 2016.2 video:

Read more about what’s new in PyCharm 2016.2 on the product website and download the IDE for your platform.

Your JetBrains Team
The Drive to Develop

Categories: FLOSS Project Planets

Import Python: ImportPython Issue 82

Planet Python - Thu, 2016-07-21 09:40

Worthy Read
Python has come a long way. So has job hunting.
Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands. (Sponsor)
Machine Learning over 1M hotel reviews finds interesting insights
In this tutorial we learned how to scrape millions of reviews, analyze them with pre-trained classifiers within MonkeyLearn, index the results with Elasticsearch and visualize them using Kibana. Machine learning makes sense when you want to analyze big volumes of data in a cost effective way. The code repository is here - https://github.com/monkeylearn/hotel-review-analysis

Mike Driscoll: Python 201: An Intro to mock
The unittest module now includes a mock submodule as of Python 3.3. It will allow you to replace portions of the system that you are testing with mock objects as well as make assertions about how they were used. A mock object is used for simulating system resources that aren’t available in your test environment. In other words, you will find times when you want to test some part of your code in isolation from the rest of it or you will need to test some code in isolation from outside services.

Altair: Declarative statistical visualization library for Python, based on Vega-Lite (pep8)
Altair is a declarative statistical visualization library for Python.

7 Django Development Best Practices Each Web Developer Must Know (django)
Set up Persistent Database Connections, Turn Cached Loading on, Store the Sessions in Cache, Keep the Application and Libraries Separate, Store All Templates in One Place, Install HTML5 Boilerplate, Monitor and Control Processes using Supervisor.

DSF Code of Conduct committee releases transparent documentation (community)
Today we're proud to open source the documentation that describes how the Django Code of Conduct committee enforces our Code of Conduct. This documentation covers the structure of Code of Conduct committee membership, the process of handling Code of Conduct violations, our decision making process, record keeping, and transparency.

Why are some functions in python spelled with underscore, while some are not: setdefault, makedirs, isinstance? (discussion)
I always wondered that. Here is a reddit discussion on the same.

Teaching an AI to write Python code with Python code (AI)
This post is about creating a machine that writes its own code. More or less. Introducing GlaDoS Skynet Spynet. More specifically, we are going to train a character level Long Short Term Memory neural network to write code itself by feeding it Python source code. The training will run on a GPU instance on EC2, using Theano and Lasagne. If some of the words here sound obscure to you, I will do my best to explain what is happening.

Writing an API with Flask-RESTful (REST)
This article will go over the details of how to create a RESTful API with Flask and Flask-RESTful. In Part 1 we will go over the API basics and how to implement a simple API. In Part 2 we will expand into advanced use cases powered by Flask-RESTful. All the code that will be shown is readily available on this repository.

SciPy 2016 videos are up (video)
Running Python Apps in the Browser by Almar Klein was a pretty interesting talk for me. See what interests you on the YouTube channel.

How to Create a Custom Django Middleware (django)
In a nutshell, a Middleware is a regular Python class that hooks into Django’s request/response life cycle. Those classes hold pieces of code that are processed upon every request/response your Django application handles.

Mike Driscoll: An Intro to coverage.py (coverage)
Coverage.py is a 3rd party tool for Python that is used for measuring your code coverage. It was originally created by Ned Batchelder. The term “coverage” in programming circles is typically used to describe the effectiveness of your tests and how much of your code is actually covered by tests. You can use coverage.py with Python 2.6 up to the current version of Python 3 as well as with PyPy.

Ajax Website Tutorial with Django (django)
In this tutorial we'll see a trivial example of how to build an Ajax website with Django. Good for students looking to learn the basics of Django/Ajax and see how it works.

Check out the Python & Django channels available on Gitter (community)
Gitter is like Slack for developers. They have active Python and Django channels. Have a look.

Introduction to Zipline in Python
Python has emerged as one of the most popular languages for programmers in financial trading, due to its ease of availability, user-friendliness and presence of sufficient scientific libraries like Pandas, NumPy, PyAlgoTrade, Pybacktest and more. Zipline is a Python library for trading applications that powers the Quantopian service mentioned above. It is an event-driven system that supports both backtesting and live-trading. In this article we will learn how to install Zipline and then how to implement a Moving Average Crossover strategy and calculate P&L, portfolio value etc.



Upcoming Conference / User Group Meet
PyCon Australia 2016
PyCon APAC 2016
EuroScipy 2016
PyCon MY 2016
Python Unconference 2016
Kiwi PyCon
PyCon ZA 2016

Projects
PokemonGo-DesktopMap - 204 Stars, 36 Forks - Electron App around PokemonGo-Map
PokemonGo-Map - 128 Stars, 55 Forks - Live visualization of all the pokemon in your area
asyncpg - 69 Stars, 2 Forks - A fast PostgreSQL Database Client Library for Python/asyncio
choronzon - 46 Stars, 16 Forks - An evolutionary knowledge-based fuzzer
zhihu-terminal - 42 Stars, 2 Forks - zhihu-terminal using python2.7.
awesome-wagtail - 14 Stars, 1 Fork - A curated list of awesome packages, articles, and other cool resources from the Wagtail community.
reddit_get_top_images - 10 Stars, 1 Fork - Get top images from any subreddit
aiosmtpd - 6 Stars, 1 Fork - A reimplementation of the Python stdlib smtpd.py based on asyncio.
delft - 6 Stars, 1 Fork - A Python tool that automatically optimizes deep learning pipelines using genetic programming.
Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible builds: week 62 in Stretch cycle

Planet Debian - Thu, 2016-07-21 09:13

What happened in the Reproducible Builds effort between June 26th and July 2nd 2016:

Read on to find out why we're lagging some weeks behind…!

GSoC and Outreachy updates
  • Ceridwen described using autopkgtest code to communicate with containers and how to test the container handling.

  • reprotest 0.1 has been accepted into Debian unstable, and any user reports, bug reports, feature requests, etc. would be appreciated. This is still an alpha release, and nothing is set in stone.

Toolchain fixes
  • Matthias Klose uploaded doxygen/1.8.11-3 to Debian unstable (closing #792201) with the upstream patch improving SOURCE_DATE_EPOCH support by using UTC as timezone when parsing the value. This was the last patch we were carrying in our repository, thus this upload obsoletes the version in our experimental repository.
  • cmake/3.5.2-2 was uploaded by Felix Geyer, which sorts file lists obtained with file(GLOB).
  • Dmitry Shachnev uploaded sphinx/1.4.4-2, which fixes a timezone related issue when SOURCE_DATE_EPOCH is set.

With the doxygen upload we are now down to only 2 modified packages in our repository: dpkg and rdfind.

Weekly reports delay and the future of statistics

To catch up with our backlog of weekly reports we have decided to skip some of the statistics for this week. We might publish them in a future report, or we might switch to a format where we summarize them more (and which we can create (even) more automatically), we'll see.

We are doing these weekly statistics because we believe it's appropriate and useful to credit people's work and make it more visible. What do you think? We would love to hear your thoughts on this matter! Do you read these statistics? Somewhat?

Actually, thanks to the power of notmuch, Holger came up with what you can see below, so what's missing for this week are the uploads fixing irreproducibilities, which we really would like to show for the reasons stated above, and because we really, really need these uploads to happen.

But then we also like to confirm the bugs are really gone, which (atm) requires manual checking, and to look for the words "reproducible" and "deterministic" (and spelling variations) in debian/changelogs of all uploads, to spot reproducible work not tracked via the BTS.

And we still need to catch up on the backlog of weekly reports.

Bugs submitted with reproducible usertags

It seems DebCamp in Cape Town was hugely successful and made some people get a lot of work done:

61 bugs have been filed with reproducible builds usertags and 60 of them had patches:

Package uploads, fixing one or more reproducible issues

FIXME:

misc.git/reports/bin/review-uploads gives back a list of uploads, which is correct, except that soundscaperenderer and reprotest are not relevant:

this is the list:

  • Bas Couwenberg ( pdl 1:2.016-3 (source amd64) into unstable
  • James Cowgill ( brainparty 0.61+dfsg-3 (source) into unstable
  • Sascha Steinbis ( genometools 1.5.8+ds-4 (source all amd64) into unstable
  • intrigeri ( libmemcached-libmemcached-perl 1.001801+dfsg-2 (source) into unstable
  • intrigeri ( libextutils-parsexs-perl 3.300000-2 (source) into unstable
  • Dr. Tobias Quat ( aspell-en 2016.06.26-0-0.1 (source all) into unstable
  • Elimar Riesebie ( mailfilter 0.8.4-2 (source) into unstable
  • ChangZhuo Chen ( hime 0.9.10+git20150916+dfsg1-8 (source amd64 all) into unstable
  • Simon McVittie ( yquake2 5.34~dfsg1-1 (source) into unstable
  • Scott Kitterman ( opendkim 2.11.0~alpha-3 (source amd64) into experimental
  • Axel Beckert ( dpmb 0~2016.06.30 (source all) into unstable
  • intrigeri ( libur-perl 0.440-3 (source) into unstable
  • intrigeri ( latexdiff 1.1.1-2 (source) into unstable
  • ChangZhuo Chen ( hime 0.9.10+git20150916+dfsg1-6 (source amd64 all) into unstable
  • Georges Khaznad ( previsat 3.5.1.7+dfsg1-2 (source amd64) into unstable
  • Julian Andres K ( ndiswrapper 1.60-2 (source) into unstable
  • Orestis Ioannou ( cloc 1.68-1.1 (source) into unstable
  • Markus Koschany ( lordsawar 0.3.0-3 (source) into unstable
  • Nicolas Braud-S ( syncthing 0.13.9+dfsg1-2 (source all amd64) into unstable
  • Eric Heintzmann ( gnustep-base 1.24.9-2 (source all amd64) into unstable
  • Daniel Kahn Gil ( gnupg2 2.1.13-3 (source) into experimental
  • ChangZhuo Chen ( gcin 2.8.4+dfsg1-7 (source amd64 all) into unstable
  • Simon McVittie ( openarena-textures 0.8.5split-8 (source) into unstable
  • Simon McVittie ( openarena-players-mature 0.8.5split-8 (source) into unstable
  • Simon McVittie ( openarena-players 0.8.5split-8 (source) into unstable
  • Gianfranco Cost ( libsdl2-gfx 1.0.1+dfsg-4 (source) into unstable
  • Felix Geyer ( cmake 3.5.2-2 (source) into unstable
  • ChangZhuo Chen ( pacapt 2.3.8-2 (source all) into unstable
  • Simon McVittie ( openarena-oacmp1 3-2 (source) into unstable
  • Simon McVittie ( openarena-misc 0.8.5split-8 (source) into unstable
  • Simon McVittie ( openarena-maps 0.8.5split-8 (source) into unstable
  • Simon McVittie ( openarena-data 0.8.5split-8 (source) into unstable
  • Simon McVittie ( openarena-088-data 0.8.8-6 (source) into unstable
  • Simon McVittie ( openarena-085-data 0.8.5split-8 (source) into unstable
  • Simon McVittie ( ostree 2016.6-2 (source) into unstable
  • Simon McVittie ( flatpak 0.6.6-2 (source) into unstable
  • Vagrant Cascadi ( u-boot 2016.03+dfsg1-6 (source) into unstable
  • intrigeri ( libwx-perl 1:0.9928-1 (source) into unstable
  • intrigeri ( libur-perl 0.440-2 (source) into unstable
  • Simon McVittie ( openarena 0.8.8-16 (source) into unstable
  • Simon McVittie ( ioquake3 1.36+u20160616+dfsg1-1 (source) into unstable
  • Matthias Klose ( doxygen 1.8.11-3 (source amd64 all) into unstable
  • Al Stone ( libbrahe 1.3.2-6 (source amd64) into unstable
  • Clint Adams ( libmsv 1.1-2 (source) into unstable
  • Sébastien Ville ( slicot 5.0+20101122-3 (source) into unstable
  • Martin Pitt ( media-player-info 22-3 (source all) into unstable
  • intrigeri ( libglib-perl 3:1.321-1 (source) into unstable
  • Sébastien Ville ( lapack 3.6.1-1 (source) into unstable
  • intrigeri ( libmarpa-r2-perl 2.086000~dfsg-6 (source) into unstable
  • intrigeri ( libgtk2-perl 2:1.2498-2 (source) into unstable
  • intrigeri ( libgnome2-perl 1.046-3 (source) into unstable
  • gregor herrmann ( libnet-tclink-perl 3.4.0-9 (source) into unstable
  • gregor herrmann ( libembperl-perl 2.5.0-7 (source) into unstable

Package reviews

437 new reviews have been added (though most of them were just linking the bug; "only" 56 new issues in packages were found), an unknown number has been updated, and 60 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been found:

Weekly QA work

98 FTBFS bugs have been reported by Chris Lamb and Santiago Vila.

diffoscope development
strip-nondeterminism development
  • Chris Lamb made sure that .zhfst files are treated as ZIP files.
tests.reproducible-builds.org
  • Mattia Rizzolo uploaded pbuilder/0.225.1~bpo8+1 to jessie-backports and it has been installed on all build nodes. As a consequence all armhf and i386 builds will be done with eatmydata; this will hopefully cut down the build time by a noticeable factor.
Misc.

This week's edition was written by Mattia Rizzolo, Reiner Herrmann, Ceridwen and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Categories: FLOSS Project Planets

Mike Driscoll: Python 201 Writing Update: Only 4 Chapters to go!

Planet Python - Thu, 2016-07-21 09:04

I finished up section #4 earlier this week which brings the book up to 26 chapters and a little over 200 pages. I have four more chapters planned and then a couple of updates to previous chapters. My goal is to have the book ready for proofing at the end of the month. Then I’ll create a sample print of the book and check it over for errors.

If anyone has been reading the book and found any errors, please let me know. I’ll be finalizing the chapters in mid-August or so and would like them to be as good as they can be before then.

Thanks so much for your support!
Mike

P.S. If you’d like to purchase the early version of the book, you can do so at Gumroad or Leanpub.

Categories: FLOSS Project Planets

Drupalize.Me: Why Is Learning Drupal Hard?

Planet Drupal - Thu, 2016-07-21 09:00

When it comes to learning Drupal I have a theory that there's an inverse relationship between the scope of knowledge that you need to understand during each phase of the learning process, and the density of available resources that can teach it to you. Accepting this, and understanding how to get through the dip, is an important part of learning Drupal.

Categories: FLOSS Project Planets

Chris Lamb: Python quirk: Signatures are evaluated at import time

Planet Debian - Thu, 2016-07-21 07:07

Every Python programmer knows to avoid mutable default arguments:

def fn(mutable=[]):
    mutable.append('elem')
    print mutable

fn()
fn()

$ python test.py
['elem']
['elem', 'elem']

However, many are not clear that this is due to default arguments being evaluated at import time (when the def statement is executed), rather than the first time the function is called.

This results in related quirks such as:

def never_called(error=1/0):
    pass

$ python test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

... and an—implementation-specific—quirk caused by naive constant folding:

def never_called():
    99999999 ** 9999999

$ python test.py
[hangs]

I suspect that this could be used as a denial-of-service vector.
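For completeness, the usual defence against the mutable-default quirk (a standard Python idiom, not something from the post above) is to use None as a sentinel and create the list inside the function body, so a fresh list is built on every call:

def fn(mutable=None):
    # The default is still evaluated when the def statement runs (i.e. at
    # import time for module-level code), but None is immutable, so each
    # call constructs its own fresh list.
    if mutable is None:
        mutable = []
    mutable.append('elem')
    print(mutable)

fn()  # ['elem']
fn()  # ['elem'] -- no state shared between calls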

Categories: FLOSS Project Planets

GSoC Update: Tinkering with KIO

Planet KDE - Thu, 2016-07-21 05:48
I'm a lot closer to finishing the project now. Thanks to some great support from my GSoC mentor, my project has turned out better than what I had written about in my proposal! Working together, we've made a lot of changes to the project.

For starters, we've changed the name of the ioslave from "File Tray" to "staging" to "stash". I wasn't a big fan of the name change, but I see the utility in shaving off a couple of characters in the name of what I hope will be a widely used feature.

Secondly, the ioslave is now completely independent from Dolphin, or any KIO application for that matter. This means it works exactly the same way across the entire suite of KIO apps. Given that at one point we were planning to make the ioslave fully functional only with Dolphin, this is a major plus point for the project.

Next, the backend for storing stashed files and folders has undergone a complete overhaul. The first iteration of the project stored files and folders by saving the URLs of stashed items in a QList in a custom "stash" daemon running on top of kded5. Although this was a neat little solution which worked well for most intents and purposes, it had some disadvantages. For one, you couldn't delete and move files around on the ioslave without affecting the source because they were all linked to their original directories. Moreover, with the way 'mkdir' works in KIO, this solution would never work without each application being specially configured to use the ioslave which would entail a lot of groundwork laying out QDBus calls to the stash daemon. With these problems looming large, somewhere around the midterm evaluation week, I got a message from my mentor about ramping up the project using a "StashFileSystem", a virtual file system in Qt that he had written just for this project.

The virtual file system is a clever way to approach this - as it solved both of the problems with the previous approach right off the bat - mkdir could be mapped to virtual directory and now making volatile edits to folders is possible without touching the source directory. It did have its drawbacks too - as it needed to stage every file in the source directory, it would require a lot more memory than the previous approach. Plus, it would still be at the whims of kded5 if a contained process went bad and crashed the daemon.

Nevertheless, the benefits in this case far outweighed the potential cons and I got to implementing it in my ioslave and stash daemon. Using this virtual file system also meant remapping all the SlaveBase functions to corresponding calls to the stash daemon which was a complete rewrite of my code. For instance, my GitHub log for the week of implementing the virtual file system showed a sombre 449++/419--. This isn't to say it wasn't productive though - to my surprise the virtual file system actually worked better than I hoped it would! Memory utilisation is low at a nominal ~300 bytes per stashed file and the performance in my manual testing has been looking pretty good.

With the ioslave and other modules of the application largely completed, the current phase of the project involves integrating the feature neatly with Dolphin and writing a couple of unit tests along the way. I'm looking forward to a good finish with this project.

You can find the source for it here: https://github.com/KDE/kio-stash (did I mention it's now hosted on a KDE repo? ;) )
Categories: FLOSS Project Planets

Programmation Qt Quick (QML)

Planet KDE - Thu, 2016-07-21 05:39

Paris, France, 2016-08-22 to 2016-08-26 (22–26 August)

In August, treat yourself to a Qt training course in French with an expert.

Learn the techniques for developing modern graphical applications, using Qt Quick technology (based on the QML language) as well as object-oriented Qt/C++ technology.

“My C++ team was delighted with this training. I hope to be able to implement Qt in our apps ASAP.” CGG Veritas, Massy, France

Find out more!

See other customer feedback.

Register

The post Programmation Qt Quick (QML) appeared first on KDAB.

Categories: FLOSS Project Planets

Codementor: User-Defined Functions in Python

Planet Python - Thu, 2016-07-21 04:57

Functions are common to all programming languages, and a function can be defined as a block of reusable code that performs specific tasks. But defining functions in Python means knowing both types first: built-in and user-defined. Built-in functions are usually a part of Python packages and libraries, whereas user-defined functions are written by developers to meet certain requirements. In Python, all functions are treated as objects, which makes them more flexible than in many other high-level languages.
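As a quick illustration of functions being objects (this snippet is an addition of mine, not from the original article), a function can be bound to a new name and passed to another function just like any other value:

def greet(name):
    """An ordinary user-defined function."""
    return "Hello, " + name

# Functions are objects: they can be bound to new names, stored in
# containers, and passed as arguments to other functions.
say_hello = greet
print(say_hello("World"))           # Hello, World

def call_with(func, value):
    return func(value)

print(call_with(greet, "Python"))   # Hello, Python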

In this article, we will focus on the user-defined functions in Python. To completely understand the concept, we will learn how they can be implemented by writing code examples. Let’s have a look at other important concepts before jumping into coding.

Importance of user-defined functions in Python

In general, developers can write their own user-defined functions, or such functions can be borrowed as a third-party library. This also means your own user-defined functions can serve as a third-party library for other users. User-defined functions have certain advantages depending on when and how they are used. Let’s have a look at the following points.

  • User-defined functions are reusable code blocks; they only need to be written once, then they can be used multiple times. They can even be used in other applications, too.
  • These functions are very useful, from writing common utilities to specific business logic. These functions can also be modified per requirement.
  • The code is usually well organized, easy to maintain, and developer-friendly, which means it can support the modular design approach.
  • As user-defined functions can be written independently, the tasks of a project can be distributed for rapid application development.
  • A well-defined and thoughtfully written user-defined function can ease the application development process.

Now that we have a basic understanding of the advantages, let’s have a look at different function arguments in Python.

Function arguments in Python

In Python, user-defined functions can take four different types of arguments. The argument types and their meanings, however, are pre-defined and can’t be changed. But a developer can, instead,  follow these pre-defined rules to make their own custom functions. The following are the four types of arguments and their rules.

1. Default arguments:

Python has its own way of representing the syntax and default values for function arguments. Default values indicate that the function argument will take that value if no argument value is passed during the function call. The default value is assigned by using the assignment (=) operator. Below is a typical syntax for a default argument. Here, the msg parameter has a default value of "Hello!".

  • Function definition def defaultArg( name, msg = "Hello!"):
  • Function call defaultArg( name)
2. Required arguments:

Required arguments are the mandatory arguments of a function. These argument values must be passed in the correct number and order during the function call. Below is a typical syntax for a required-argument function.

  • Function definition def requiredArg (str,num):
  • Function call requiredArg ("Hello",12)
3. Keyword arguments:

Keyword arguments are relevant for Python function calls. The keywords are mentioned during the function call along with their corresponding values. These keywords are mapped to the function arguments, so the function can easily identify the corresponding values even if the order is not maintained during the function call. The following is the syntax for keyword arguments.

  • Function definition def keywordArg( name, role ):
  • Function call keywordArg( name = "Tom", role = "Manager")

    or

    keywordArg( role = "Manager", name = "Tom")
4. Variable number of arguments:

This is very useful when we do not know the exact number of arguments that will be passed to a function, or when we want a design where any number of arguments can be passed based on the requirement. Below is the syntax for this type of function call.

  • Function definition def varlengthArgs(*varargs):
  • Function call varlengthArgs(30,40,50,60)

Now that we have an idea about the different argument types in Python, let’s check the steps for writing a user-defined function.

Writing user-defined functions in Python

These are the basic steps in writing user-defined functions in Python. For additional functionalities, we need to incorporate more steps as needed.

  • Step 1: Declare the function with the keyword def followed by the function name.
  • Step 2: Write the arguments inside the opening and closing parentheses of the function, and end the declaration with a colon.
  • Step 3: Add the program statements to be executed.
  • Step 4: End the function with or without a return statement.

The example below is a typical syntax for defining functions:

def userDefFunction (arg1, arg2, arg3 ...):
    program statement1
    program statement2
    program statement3
    ....
    return;

Let’s try some code examples

In this section, we will cover four different examples for all the four types of function arguments.

Default arguments example

The following code snippet represents a default argument example. We have written the code in a script file named defArg.py

Listing 1: Default argument example

def defArgFunc( empname, emprole = "Manager" ):
    print ("Emp Name: ", empname)
    print ("Emp Role ", emprole)
    return;

print("Using default value")
defArgFunc(empname="Nick")
print("************************")
print("Overwriting default value")
defArgFunc(empname="Tom",emprole = "CEO")

Now run the script file as shown below. It will display the following output:
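The original post showed the output as a screenshot; assuming the script is run with Python 3, it would look roughly like this:

$ python3 defArg.py
Using default value
Emp Name:  Nick
Emp Role  Manager
************************
Overwriting default value
Emp Name:  Tom
Emp Role  CEO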

Required arguments example

The code snippet below represents a required argument example. We have written the code in a script file named reqArg.py

Listing 2: Required argument example

def reqArgFunc( empname):
    print ("Emp Name: ", empname)
    return;

print("Not passing required arg value")
reqArgFunc()
print("Now passing required arg value")
reqArgFunc("Hello")

Now, first run the code without passing the required argument and the following output will be displayed:
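The original screenshot is not reproduced here; assuming Python 3, the failing run would produce a traceback along these lines (the exact line number depends on the file layout):

$ python3 reqArg.py
Not passing required arg value
Traceback (most recent call last):
  File "reqArg.py", line 5, in <module>
    reqArgFunc()
TypeError: reqArgFunc() missing 1 required positional argument: 'empname'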

Now comment out the reqArgFunc() function call in the script, and run the code with the required argument. The following output will be displayed:
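With only the bare reqArgFunc() call commented out (and still assuming Python 3), the run would show roughly:

$ python3 reqArg.py
Not passing required arg value
Now passing required arg value
Emp Name:  Hello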

Keyword arguments example

Below is an example code snippet for keyword arguments. We have written the code in a script file named keyArg.py

Listing 3: Keyword argument example

def keyArgFunc(empname, emprole):
    print ("Emp Name: ", empname)
    print ("Emp Role: ", emprole)
    return;

print("Calling in proper sequence")
keyArgFunc(empname = "Nick",emprole = "Manager" )
print("Calling in opposite sequence")
keyArgFunc(emprole = "Manager",empname = "Nick")

Now run the script file as shown below. It will display the following output:
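Both calls print the same result, which is the point of keyword arguments; under Python 3 the output would be approximately:

$ python3 keyArg.py
Calling in proper sequence
Emp Name:  Nick
Emp Role:  Manager
Calling in opposite sequence
Emp Name:  Nick
Emp Role:  Manager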

Variable number of arguments example

The code snippet below shows an example of a variable length argument. We have written the code in a script file named varArg.py

Listing 4: Variable length argument example

def varLenArgFunc(*varvallist ):
    print ("The Output is: ")
    for varval in varvallist:
        print (varval)
    return;

print("Calling with single value")
varLenArgFunc(55)
print("Calling with multiple values")
varLenArgFunc(50,60,70,80)

Once you run the code, the following output will be displayed:
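Assuming Python 3, the run would look roughly like this (the original screenshot is not reproduced):

$ python3 varArg.py
Calling with single value
The Output is:
55
Calling with multiple values
The Output is:
50
60
70
80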

Conclusion

In this article, we have discussed the different aspects of user-defined functions in Python. We have also explored how user-defined functions can be written in simple steps.

These are basic concepts that every Python developer should always keep in mind even as a beginner or as an expert. Learning Python can get tricky but staying true to the basics will help you master the language better.


Author’s Bio:

Kaushik Pal has more than 16 years of experience as a technical architect and software consultant in enterprise application and product development. He has an interest in new technology and innovation, along with technical writing. His main focus is web architecture, web technologies, Java/J2EE, open source, big data, cloud, and mobile technologies. You can find more of his work at www.techalpine.com and you can email him at techalpineit@gmail.com or kaushikkpal@gmail.com

Categories: FLOSS Project Planets

FFW Agency: The Power of Extending Twig Templates

Planet Drupal - Thu, 2016-07-21 03:06
By David Hernandez, Thu, 07/21/2016 - 07:06

Extending in Twig is a method by which one template can inherit content from another template, while still being able to override parts of that content. This relationship is easy to imagine if you are familiar with Drupal’s default system of template inheritance.

A theme can have multiple page templates, many node templates, even more field templates, and a plethora of block and Views templates. And it is common for those templates to be largely identical, save for a snippet of markup or some logic. The advantage of extending templates is reducing this duplication, thereby simplifying architecture and easing maintenance.

Let’s say, for example, you want to change the template for a specific block, adding a wrapper div around the main content area. This might be done by copying the standard block template and giving it a name specific to your block.

Classy’s block.html.twig template
{%
  set classes = [
    'block',
    'block-' ~ configuration.provider|clean_class,
    'block-' ~ plugin_id|clean_class,
  ]
%}
<div{{ attributes.addClass(classes) }}>
  {{ title_prefix }}
  {% if label %}
    <h2{{ title_attributes }}>{{ label }}</h2>
  {% endif %}
  {{ title_suffix }}
  {% block content %}
    {{ content }}
  {% endblock %}
</div>

Copied to block--my-special-block.html.twig
{%
  set classes = [
    'block',
    'block-' ~ configuration.provider|clean_class,
    'block-' ~ plugin_id|clean_class,
  ]
%}
<div{{ attributes.addClass(classes) }}>
  {{ title_prefix }}
  {% if label %}
    <h2{{ title_attributes }}>{{ label }}</h2>
  {% endif %}
  {{ title_suffix }}
  {% block content %}
    <div class="content-wrapper">{{ content }}</div>
  {% endblock %}
</div>

This accomplishes your goal. You have a template specific to this particular block, and a wrapper div just where you need it. Following the same method, and with a complex site, you can end up with lots of different block templates (or node templates, or field templates, or … you get the idea.)

But, now you have a different problem. The majority of the template is duplicated. All the CSS classes, the outer wrapper, the markup for the block title, etc. If any of that needs to be changed, like changing all block titles from H2s to H3s, you have to update every single one of those templates.

Even if this happens infrequently enough not to be considered time consuming, it is still prone to errors. You might make a mistake in one template, miss one that needs changing, or even change one that should not be changed.

This is where {% extends %} comes in

Extending templates allows you to reference the original template, and only override the parts that are unique to the child template.

In the block example, we can create a block--my-special-block.html.twig template with this content:

{% extends "block.html.twig" %}
{% block content %}
  <div class="content-wrapper">{{ parent() }}</div>
{% endblock %}

That’s it. That is the whole template. Twig uses the original block.html.twig template as the main template, and only uses what we override in the more specific block--my-special-block.html.twig template.

The parent() function simply returns all of the content within the {% block %} tags in the original template. This saves us from having to duplicate that content; keeping the template simple, and future proofing it. If any of that content changes in the original template, we don’t have to update the block--my-special-block.html.twig template.

In this example, the content in the original template is fairly simple, only printing the content variable, but imagine if there was a large amount of multiline html and Twig code wrapped in those block tags.

Twig blocks, not Drupal blocks!

This overriding is done by using Twig blocks. (Terminology is fun!) The Twig block is what you see identified by the {% block %} and {% endblock %} tags. The word "content" is the identifier for the block. You can have multiple blocks in a single template.

In the block--my-special-block.html.twig template file, we can do anything we want inside the block tags. Twig will replace the original template’s “block” with the one in block--my-special-block.html.twig.

What else?

Well, you have access to pretty much everything in the main template, except the printed markup. So, for example, you can modify the variables it uses.

{% extends "block.html.twig" %}
{% set attributes = attributes.addClass('super-special') %}

This template will add a CSS class called "super-special" to the attributes printed in the outer wrapper of the original block template. The alternative would be to copy the content of the entire block.html.twig template just to add this class to the ‘classes’ array at the top of the file.

You can also just set a variable that will be used by the original template.

{% extends "block.html.twig" %}
{% set foo = 'yellow' %}

Imagine a series of variant field or content type templates that set variables used by the original template for classes, determining structure, etc.

You can even add Twig logic.

{% extends "block.html.twig" %}
{% block content %}
  {% if foo %}
    <div class="content-wrapper">{{ parent() }}</div>
  {% else %}
    {{ parent() }}
  {% endif %}
{% endblock %}

Pretty much anything you still might want to do with Twig, inside or outside of the block tags, you can still do.

Things to note

Before you jump right in, and bang your head against a wall trying to figure out why something isn’t working, there are a few things to know.

  • The {% extends %} line needs to be at the top of the file.
  • When overriding markup, you can only change what is within block tags in the original template. So add {% block %} tags around anything you might want to modify.
  • You cannot print additional things outside of the overriding block tags. You will have an extends line. You can set variables, add comments, add logic, and override blocks. You cannot put other pieces of markup in the template. Only markup that is inside a block.
  • If Drupal does not extend the correct template, based on what you expect from template inheritance, you may have to explicitly state the template you want.
    Example: {% extends "@classy/block/block.html.twig" %}
Categories: FLOSS Project Planets

Stefan Behnel: Cython for async networking

Planet Python - Thu, 2016-07-21 02:19

EuroPython 2016 seems to have three major topics this year, two of which make heavy use of Cython. The first, and probably most wonderful topic is beginners. The conference started with a workshop day on Sunday that was split between Django Girls and (other) Python beginners. The effect on the conference is totally visible: lots of new people walking around, visibly more Python beginners, and a clearly better ratio of women to men.

The other two big topics are: async networking and machine learning. Machine learning fills several talks and tutorials, and is obviously backed by Cython implemented tools in many corners.

For async networking, however, it might seem more surprising that Cython has such a good stand. But there are good reasons for it: even mostly I/O bound applications can hugely benefit from processing speed at the different layers, as Anton and I showed in our talk on Monday (see below). The deeper you step down into the machinery, however, the more important that speed becomes. And Yury Selivanov is giving an excellent example for that with his reimplementation of the asyncio event loop in Cython, named uvloop. Here is a blog post announcing uvloop.

Since the final talk recordings are not online yet, I have to refer to the live stream dumps for now.

The talk by Anton Caceres and me (we're both working at Skoobe) on Fast Async Code with Cython and AsyncIO starts at hour/minute 2:20 in the video. We provide examples and give motivations for compiling async code to speed up the processing and cut down the overall response latency. I'm also giving a very quick "Cython in 10 Minutes" intro to the language about half way through the talk.

Yury's talk on High Performance Networking in Python starts at minute 10. He gives a couple of great testimonials for Cython along the way, describing how the async/await support in Cython and the ease of talking to C libraries has enabled him to write a tool that beats the performance of well known async libraries in Go and Javascript.

Categories: FLOSS Project Planets

Kubuntu Podcast #14 – UbPorts interview with Marius Gripsgard

Planet KDE - Wed, 2016-07-20 17:56

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt

Show Schedule Intro

What have we (the hosts) been doing ?

  • Aaron
    • Working a sponsorship out with Linode
    • Working on uCycle
  •  Rick
    • #Brexit – It would be Rude Not to [talk about it]
    • Comodo – Let’s Encrypt Brand challenge https://letsencrypt.org//2016/06/23/defending-our-brand.html#1
Sponsor 1 Segment

Big Blue Button

Those of you that have attended the Kubuntu parties, will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org. Go check out their project.

Kubuntu News

Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Joining us today is Marius Gripsgard from the UbPorts project.

https://www.patreon.com/ubports

Sponsor 2 Segment

Linode

We’ve been in talks with Linode, an awesome VPS with super fast SSDs, Data connections, and top notch support. We have worked out a sponsorship for a server to build packages quicker and get to our users faster. BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback
  • Plasma 5.7 is unlikely to hit Xenial Backports in the short term, as it is still dependent on QT 5.6.1 for which there is currently no build for Xenial.
    There is an experimental build the Acheronuk has been working on, but there are still stability issues.
Game On

Steam Group: http://steamcommunity.com/groups/kubuntu-podcast

Review and gameplay from Shadow Warrior.

Outro

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

Categories: FLOSS Project Planets

Plasma’s Publictransport applet’s porting status

Planet KDE - Wed, 2016-07-20 17:10

You might remember that I spoke about Plasma’s Publictransport applet getting some reworking during the summer. It’s been over a month since I made that announcement on my blog and while ideally, I’d have liked to have blogged every week about my work, I haven’t really been able to. This is largely down to the… Read the full post »

Categories: FLOSS Project Planets

GVSO Blog: [GSoC 2016: Social API] Week 8: Social Post implementer

Planet Drupal - Wed, 2016-07-20 16:36

Week 8 is over and we are just one month away from Google Summer of Code final evaluation. I mentioned in my last weekly summary that I would work on documentation about implementing a Social Auth integration.

gvso, Wed, 07/20/2016 - 16:36. Tags: Drupal, Drupal Planet, GSoC 2016
Categories: FLOSS Project Planets

DataCamp: New Free Course: Intro to Python & Machine Learning with Analytics Vidhya

Planet Python - Wed, 2016-07-20 14:17
New Free Course: Intro to Python & Machine Learning (with Analytics Vidhya Hackathons)

The DataCamp team is excited to announce a free course from our friends at Analytics Vidhya. This course begins with an introduction to Python detailing everything from the importance of Python for data scientists to best practices for improving model performance. 

The course serves as an introduction and offers more detail about the basic syntax and data structures of Python, like lists, strings and using Python libraries. After getting a feel for the language and syntax, the course presents exercises on data exploration, data manipulation, and building predictive models. 

The tutorial will show you how to:

  • Explore data through analytic graphs
  • Evaluate and overcome missing data
  • Model with Logistic Regressions, Decision Trees, and Random Forests
  • Use feature engineering and selection to improve your model

Once you have completed this course, you will be better equipped to participate and compete in the data science hackathons that Analytics Vidhya frequently conducts here. So don't wait: get started and sharpen your data science skills! Want to see other topics covered as well? Just let us know on Twitter.

Create your own course

Would you like to create your own course? Using DataCamp Teach, you can easily create and host your own interactive tutorial for free. Use the same system DataCamp course creators use to develop their courses, and share your Python knowledge with the rest of the world. 

Categories: FLOSS Project Planets

Third & Grove: Drupal GovCon: Day 1 Recap

Planet Drupal - Wed, 2016-07-20 14:03
abby, Wed, 07/20/2016 - 14:03
Categories: FLOSS Project Planets

Into my Galaxy: GSoC’ 16: Port Search Configuration module; coding week #8

Planet Drupal - Wed, 2016-07-20 13:50

I have been porting the Search Configuration module from Drupal 7 to 8 as part of this year’s Google Summer of Code (GSoC). This summer program is an opportunity for university students to work on projects connected with open source organisations. I have been really lucky to be a part of this initiative. I have been able to explore more technologies and version control systems in depth as part of my project in Drupal. The program gives young students a platform where they are assigned mentors who are experts in and experienced with various software.

Last week, I learned some Drupal concepts as part of this module port. So, let me begin with the Drupal 7 t() function. It translates a string to the current language or to a given language, which makes the strings used in Drupal translatable. It generally takes the format:

t($string, array $args = array(), array $options = array());

Here, $string is the string containing the English text to get translated.

$args: An associative array of replacements to make after translation.

$options: An optional associative array of additional options, with the following elements: langcode and context.

This t() function has been altered slightly in Drupal 8. It is replaced by $this->t(), which is made available by using \Drupal\Core\StringTranslation\StringTranslationTrait.

The call returns a TranslatableMarkup object, which is rendered as the translated string.

Another important aspect I dealt with was roles. This is an important feature for any module as it deals with the security constraints of the module. Roles are often manipulated to grant certain permissions. What we have to do is first load the particular role to be manipulated and then grant it the permission in question.

use Drupal\user\Entity\Role;

// Load the role to manipulate (here, the built-in 'authenticated' role)
// and grant it a permission.
$role = Role::load('authenticated');
$role->grantPermission('access comments');
$role->save();

These role functions help us load roles and manipulate the permissions assigned to them quite easily, so they turn out to be really helpful in dealing with permissions.

I have also been dealing with writing the simple test (Simpletest) for my module. In one of my previous blog posts, I introduced PHP unit testing. The simple test exercises the web-oriented functionality of the module, and it takes a good understanding of the module's behaviour to write an effective test. Tests are often really important for identifying the flaws in a functionality and correcting them accordingly. I will be writing the simple tests for my module in the coming days and will share the concept of this mode of testing in my next blog post.

Stay tuned to this blog for further developments.

Categories: FLOSS Project Planets

Daniel Pocock: How many mobile phone accounts will be hijacked this summer?

Planet Debian - Wed, 2016-07-20 13:48

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security, and Brexit has already put one large travel firm into liquidation, leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company into issuing a replacement SIM or porting the number. It also provides a fresh incentive for criminals to steal phones, while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.


Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service. Can you and your family members say the same?

What can be done?
  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.

See also my previous blogs on SMS messaging, security and two-factor authentication, including my earlier post SMS logins: an illusion of security.

Categories: FLOSS Project Planets

Mike Driscoll: An Intro to coverage.py

Planet Python - Wed, 2016-07-20 13:15

Coverage.py is a third-party tool for Python that is used for measuring your code coverage. It was originally created by Ned Batchelder. The term "coverage" in programming circles is typically used to describe the effectiveness of your tests and how much of your code is actually covered by tests. You can use coverage.py with Python 2.6 up to the current version of Python 3, as well as with PyPy. Installing it is a single pip command:

pip install coverage

Now that we have coverage.py installed, we need some code to use it with. Let's create a module that we'll call mymath.py. Here's the code:

def add(a, b):
    return a + b


def subtract(a, b):
    return a - b


def multiply(a, b):
    return a * b


def divide(numerator, denominator):
    return float(numerator) / denominator

Now we need a test. Let’s create one that tests the add function. Let’s give our test the following name: test_mymath.py. Go ahead and save it in the same location as you did for the previous module. Then add the following code to our test:

# test_mymath.py
import mymath
import unittest


class TestAdd(unittest.TestCase):
    """
    Test the add function from the mymath library
    """

    def test_add_integers(self):
        """
        Test that the addition of two integers returns the correct total
        """
        result = mymath.add(1, 2)
        self.assertEqual(result, 3)

    def test_add_floats(self):
        """
        Test that the addition of two floats returns the correct result
        """
        result = mymath.add(10.5, 2)
        self.assertEqual(result, 12.5)

    def test_add_strings(self):
        """
        Test the addition of two strings returns the two strings as one
        concatenated string
        """
        result = mymath.add('abc', 'def')
        self.assertEqual(result, 'abcdef')


if __name__ == '__main__':
    unittest.main()

Now that we have all the pieces, we can run coverage.py using the test. Open up a terminal and navigate to the folder that contains the mymath module and the test code we wrote. Now we can call coverage.py like this:

coverage run test_mymath.py

Note that we need to call run to get coverage.py to run the module. If your module accepts arguments, you can pass those in as you normally would. When you do this, you will see the test’s output as if you ran it yourself. You will also find a new file in the directory that is called .coverage (note the period at the beginning). To get information out of this file, you will need to run the following command:

coverage report -m

Executing this command will result in the following output:

Name             Stmts   Miss  Cover   Missing
----------------------------------------------
mymath.py            9      3    67%   9, 13, 17
test_mymath.py      14      0   100%
----------------------------------------------
TOTAL               23      3    87%

The -m flag tells coverage.py that you want it to include the Missing column in the output. If you omit the -m, then you’ll only get the first four columns. What you see here is that coverage ran the test code and determined that I have only 67% of the mymath module covered by my unit test. The “Missing” column tells me what lines of code still need coverage. If you look at the lines coverage.py points out, you will quickly see that my test code doesn’t test the subtract, multiply or divide functions.

Before we try to add more test coverage, let’s learn how to make coverage.py produce an HTML report. To do this, all you need to do is run the following command:

coverage html

This command will create a folder named htmlcov that contains various files. Navigate into that folder and try opening index.html in your browser of choice. On my machine, it loaded a page like this:

You can actually click on the modules listed to load up an annotated web page that shows you what parts of the code are not covered. Since the mymath.py module obviously isn’t covered very well, let’s click on that one. You should end up seeing something like the following:

This screenshot clearly shows what parts of the code were not covered in our original unit test. Now that we know definitively what’s missing in our test coverage, let’s add a unit test for our subtract function and see how that changes things!

Open up your copy of test_mymath.py and add the following class to it:

class TestSubtract(unittest.TestCase):
    """
    Test the subtract function from the mymath library
    """

    def test_subtract_integers(self):
        """
        Test that subtracting integers returns the correct result
        """
        result = mymath.subtract(10, 8)
        self.assertEqual(result, 2)

Now we need to re-run coverage against the updated test. All you need to do is re-run this command: coverage run test_mymath.py. The output will show that four tests have passed successfully. Now re-run coverage html and re-open the "index.html" file. You should now see that we're at 78% coverage:

This is an 11% improvement! Let’s go ahead and add a simple test for the multiply and divide functions and see if we can hit 100% coverage!

class TestMultiply(unittest.TestCase):
    """
    Test the multiply function from the mymath library
    """

    def test_multiply_integers(self):
        """
        Test that multiplying integers returns the correct result
        """
        result = mymath.multiply(5, 50)
        self.assertEqual(result, 250)


class TestDivide(unittest.TestCase):
    """
    Test the divide function from the mymath library
    """

    def test_divide_by_zero(self):
        """
        Test that dividing by zero raises ZeroDivisionError
        """
        with self.assertRaises(ZeroDivisionError):
            result = mymath.divide(8, 0)

Now you can re-run the same commands as before and reload the “index.html” file. When you do, you should see something like the following:

As you can see, we have hit full test coverage! Of course, full coverage in this case means that each function is exercised by our test suite. The problem with this is that we have three times the number of tests for the addition function versus the others, but coverage.py doesn't give us any kind of data about that. However, it will give us a good idea of basic coverage, even if it can't tell us whether we've tested every possible argument permutation imaginable.
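To make that limitation concrete, here is a hypothetical, deliberately weak test (it is not part of the test file we built above): dropped into test_mymath.py, it calls divide() without really checking the result, yet coverage.py still counts every line it touches as covered.

class TestDivideWeakly(unittest.TestCase):
    """A deliberately weak test: it executes the code, so the lines count
    as covered, but it barely verifies the behaviour."""

    def test_divide_runs(self):
        # The body of mymath.divide is marked as covered even though we
        # never check the value it returns.
        mymath.divide(10, 2)

A test like this and a test with meaningful assertions produce exactly the same coverage numbers, which is why a coverage report is best read as a map of what has not been tested rather than as proof of what has.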

Additional Information

I just wanted to mention a few other features of coverage.py without going into a lot of detail. First, coverage.py supports configuration files. The configuration file format is your classic ".ini" file, with section names surrounded by square brackets (e.g. [my_section]). You can add comments to the config file using # or ; (semi-colon).

Coverage.py also allows you to specify which source files you want it to analyze via the configuration file mentioned previously. Once you have the configuration set up the way you want it, you can run coverage.py. It also supports a "--source" command-line switch. Finally, you can use the "--include" and "--omit" switches to include or exclude lists of file name patterns. These switches have matching configuration values that you can add to your configuration file too.
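As a rough sketch (this is an assumed, typical setup rather than anything prescribed above), a .coveragerc file that mirrors the --source and --omit switches might look something like this:

# .coveragerc -- a minimal sketch; see the coverage.py docs for the full
# list of options
[run]
# Measure only the project's own code (the config equivalent of --source)
source = .
# Leave the test modules themselves out of the measurement (cf. --omit)
omit =
    test_*.py

[report]
# Same effect as passing -m to "coverage report"
show_missing = True

With a file like this sitting next to your code, plain coverage run test_mymath.py and coverage report pick the settings up automatically.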

The last item that I want to mention is that coverage.py supports plugins. You can write your own or download and install someone else’s plugin to enhance coverage.py.
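To give a feel for the shape of a plugin, here is a bare-bones sketch based on my reading of the coverage.py plugin documentation; treat the exact names as something to double-check against the docs for your version:

# myplugin.py -- a skeleton plugin that registers itself but adds no real
# behaviour; the class and hook names are taken from the plugin docs
import coverage


class MyPlugin(coverage.CoveragePlugin):
    """Override hooks such as file_tracer() here to change how files are
    measured or reported."""


def coverage_init(reg, options):
    # coverage.py calls this entry point and passes in a registry object
    # plus any options set in the configuration file
    reg.add_file_tracer(MyPlugin())

The plugin is then enabled by listing its module name under the plugins setting in the [run] section of the configuration file.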

Wrapping Up

You now know the basics of coverage.py and what this special package is useful for. Coverage.py allows you to check your tests and find holes in your test coverage. If you aren’t sure you’ve got your code tested properly, this package will help you ascertain where the holes are if they exist. Of course, you are still responsible for writing good tests. If your tests aren’t valid but they pass anyway, coverage.py won’t help you.

Categories: FLOSS Project Planets