FLOSS Project Planets

Enrico Zini: Modern and secure instant messaging

Planet Debian - Wed, 2017-01-11 06:43

Conversations is a really nice, actively developed, up to date XMPP client for Android that has the nice feature of telling you what XEPs are supported by the server one is using:

A few days ago, Valhalla and I played the game of trying to see what happens when one turns them all on: I would send her screenshots from my Conversations client, and she would poke at her Prosody server to try to enable the missing features:

Valhalla eventually managed to get all features activated, purely using packages from Jessie+Backports:

The result was a chat system in which I could see the same conversation history on my phone and on my laptop (with Gajim, https://gajim.org/), and have it stay synced even after a device had been offline.

We could send each other rich media like photos, and could do OMEMO encryption (same as Signal) in group chats.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.

Valhalla has documented the whole procedure.

If you make a client for a protocol with lots of extensions, do as Conversations does: implement a status page listing the features you'd like to have on the server, with little green indicators showing which are available. It is quite a good motivator for getting them all supported.

Categories: FLOSS Project Planets

PyCharm: Webinar: Adding a REST API to a Django Application

Planet Python - Wed, 2017-01-11 06:20

Calvin Hendryx-Parker (CTO of sixfeetup, and founder of the Indianapolis-based IndyPy user group) hosted a webinar for us where he showed how to add a REST API to a Django application.

In the webinar, he shows how to use djangorestframework. This library makes adding a REST API to a Django application very easy. The framework supports serving JSON and HTML endpoints by default, but can be extended to support other formats like XML and CSV.

Calvin discusses how to use viewsets and routers to make the code concise and effective. Furthermore he shows how to add authentication to the API.

To follow along with Calvin, you can clone the repository from GitHub. To do this right in PyCharm go to VCS | Checkout from Version Control | GitHub, and then use the following repository URL: https://github.com/sixfeetup/ElevenNote.git

PyCharm Professional Edition has several features which make Django development a lot easier: tight integration with the framework itself for the manage.py console, and tailored run configurations. Other Professional Edition features that make web development easier are for example the REST client, and the built-in database tooling (from DataGrip, the JetBrains database IDE).

Please keep in mind that Calvin uses several features that are available only in PyCharm Professional Edition. If you're using PyCharm Community Edition, you can try these features free for 30 days by downloading the latest version of PyCharm Professional Edition.

If you have any questions or comments about the webinar, feel free to leave them in the comments below, or you can reach us on Twitter. Calvin is on Twitter as well; his handle is @calvinhp.

-PyCharm Team
The Drive to Develop

Categories: FLOSS Project Planets

OSTraining: Contribute Your Code on Drupal.org, Part 5: Upload Your Project

Planet Drupal - Wed, 2017-01-11 05:12

Previously we talked about the different ways in which you can contribute to Drupal, setting up your project, configuring Git, and checking that you are connected to your sandbox project.

Now we are going to upload our project and check that it meets Drupal's standards.

Categories: FLOSS Project Planets

Third & Grove: Granite Construction Drupal Case Study

Planet Drupal - Wed, 2017-01-11 03:00
Granite Construction Drupal Case Study antonella Wed, 01/11/2017 - 03:00
Categories: FLOSS Project Planets

pgcli: Release v1.4.0

Planet Python - Wed, 2017-01-11 03:00

Pgcli is a command line interface for the Postgres database that offers auto-completion and syntax highlighting. You can install this version using:

$ pip install -U pgcli

Check detailed instructions if you're having difficulty.
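Once it is installed, you typically connect by passing a database name (and, if needed, host and user options) on the command line, much as you would with psql; the database name below is just a placeholder:

$ pgcli my_database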

Features, Bug Fixes, and Internal Changes:
  • Set default data_formatting to nothing. (Thanks: Amjith Ramanujam).
  • Increased minimum prompt_toolkit requirement to 1.0.9. (Thanks: Irina Truong).
Categories: FLOSS Project Planets

Talk Python to Me: #94 Guaranteed packages via Conda and Conda-Forge

Planet Python - Wed, 2017-01-11 03:00
Have you ever had trouble installing a package you wanted to use in your Python app? Likely it contained some odd dependency, required a compilation step, maybe even using an uncommon compiler like Fortran. Did you try it on Windows? How many times have you seen "Cannot find vcvarsall.bat" before you had to take a walk?

If this sounds familiar, you might want to check out conda (the package manager), Anaconda (the distribution), conda-forge, and conda-build. They dramatically lower the bar for installing packages on all the platforms.

This week you'll meet Phil Elson, Kale Franz, and Michael Sarahan, who all work on various parts of this ecosystem.

Note: The fact that Continuum, the company behind conda, is sponsoring this episode while the topic is about conda is pure coincidence. This show was recorded long before Continuum came on as a sponsor, and they only have a week or two to get the word out about their conference in February. I just want to be clear that I featured conda on the show because I believe it's a really cool project. Hope you do too.

Links from the show:

  • conda: http://conda.pydata.org/docs/
  • conda-build: http://conda.pydata.org/docs/commands/build/conda-build.html
  • Anaconda distribution: https://www.continuum.io/anaconda-overview
  • conda-forge: https://conda-forge.github.io/
  • Phil Elson on Twitter: @pypelson (http://twitter.com/pypelson)
  • Kale Franz: @kalefranz (https://twitter.com/kalefranz)
  • Michael Sarahan: github.com/msarahan
Categories: FLOSS Project Planets

Codementor: Cheat Sheet: Python For Data Science

Planet Python - Tue, 2017-01-10 22:51

Starting to learn a new programming language is never easy. And for aspiring data scientists, this can be even more so: most of the time they come from a different field of study, or they already have several years of experience in an industry that is very different from the data science industry.

Luckily, there are several resources that you can fall back on, online as well as in real life. But, specifically for data science, you'll sometimes find the available material lacking: there are general Python cheat sheets that cover the most important things you need to know to program with Python, but they don't specifically target the data science industry.

To help students taking its free Python for Data Science course, DataCamp started a series of cheat sheets aimed at those who are just starting out with data science and who could use some extra material to support their learning.

Python For Data Science Cheat Sheet

The cheat sheet is a handy addition to your learning: it covers the basics that any beginner needs to know to get started doing data science with Python, brought together in seven topics.

Variables and data types

To start with Python, you first need to know about variables and data types. That should not come as a surprise, as they are the basics of every programming language.

Variables are used to name and store a value for later use, such as reference or manipulation, by the computer program. To store a value, you assign it to a variable. This is called variable assignment: you set or reset the value that is stored in one or more locations denoted by a variable name.

When you have assigned a value to a variable, your variable gains or changes its data type. The data type specifies which type of value a variable holds and what type of operations can be applied to it. In Python, you can easily assign values to variables like this: x=5. When you then print out or refer to x, you’ll get back the value 5. Naturally, the data type of x will be an integer.

These are just the bare Python basics. The next step is then to do calculations with variables. The ones that the cheat sheet mentions are sum, subtraction, multiplication, exponentiation, remainder and division. You will already know how these operations work and what effect they can have on values, but the cheat sheet also shows you how to perform these operations in Python :)

When you're just starting out with Python, you might also find it useful to get to know more about certain functions. Luckily, others before you have had this need, so there is a way to ask for more information: just use the built-in help() function. Don't forget to pass the element you want to know more about; in other words, you put the element, in this case str, between the parentheses to get back the information you need.
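To make the above concrete, here is a small illustrative sketch (the values are made up, not taken from the cheat sheet itself):

# Variable assignment: x now holds an integer
x = 5
print(type(x))   # <class 'int'>

# Basic calculations with variables
print(x + 2)     # sum: 7
print(x - 2)     # subtraction: 3
print(x * 2)     # multiplication: 10
print(x ** 2)    # exponentiation: 25
print(x % 2)     # remainder: 1
print(x / 2)     # division: 2.5 (in Python 3)

# Ask for more information about the str type
help(str)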

Next, the cheat sheet lists two of the most popular built-in data structures: strings and lists.

Strings

Strings are one of the basic elements of programming languages in general and this is not much different for Python. Things that you should master when it comes to working with strings are some string operations and string methods.

There are basically four string operations that you need to know to get started on working with strings:

  1. If you multiply your string by an integer x, you get back a significantly longer string: the original string repeated x times.
  2. If you add a string to your original string, you’ll get back a concatenation of your string and the new string that you have added to it.
  3. You should be able to check whether a certain element is present in your string.
  4. You should know that you can also select elements from strings in Python. Don’t forget here that the index starts at 0; you’ll see this coming back later when you’re working with lists and numpy arrays, but also in other programming languages.

When it comes to string methods, it's definitely handy to know that you can use the upper() and lower() methods to put your string in uppercase or lowercase, respectively. Knowing how to count string elements or how to replace them is also well worth learning. And, especially when you're parsing text, you'll find the method that strips whitespace from the ends enormously handy.
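As a rough sketch of these operations and methods in practice (the example string is made up, not the cheat sheet's own):

my_string = 'thisStringIsAwesome'

# String operations
print(my_string * 2)          # repeat/concatenate the string twice
print(my_string + 'Innit')    # concatenate with another string
print('m' in my_string)       # membership test: True
print(my_string[3])           # indexing starts at 0, so this is 's'
print(my_string[4:9])         # slicing gives 'Strin'

# String methods
print(my_string.upper())      # uppercase
print(my_string.lower())      # lowercase
print(my_string.count('s'))   # count occurrences of a substring
print(my_string.replace('e', 'i'))  # replace substrings
print('  padded  '.strip())   # strip whitespace from both ends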

You might feel that strings aren't what you'll be using immediately when you start doing data science, and that is mostly true; text mining and Natural Language Processing (NLP) are more advanced topics. But that is no excuse to neglect this data structure!

Lists

Lists, on the other hand, will seem more useful from the start. Lists are used to store an ordered collection of items, which might be of different types but usually aren't. The elements contained in a list are separated by commas and enclosed in square brackets. In the cheat sheet's example, the my_list variable is made up of strings: you have "my" and "list", but also variables that themselves hold strings. You will also see a reference to NumPy arrays in this section. That's mainly because there have been some discussions about whether to use lists or arrays in certain cases.

The four reasons most Pythonistas mention for preferring NumPy arrays over lists (illustrated in the short sketch after this list) are:


  • NumPy arrays are more compact than lists,
  • Access in reading and writing items is faster with NumPy,
  • NumPy can be more convenient to work with, thanks to the fact that you get a lot of vector and matrix operations for free,
  • NumPy arrays can be more efficient to work with, because the underlying operations are implemented in optimized, compiled code.
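A quick sketch of the third point, vectorized operations coming for free (illustrative values only):

import numpy as np

numbers = list(range(5))
array = np.array(numbers)

# With a NumPy array, arithmetic is applied element-wise in one step
print(array * 2)                  # [0 2 4 6 8]

# With a plain list, you need an explicit loop or comprehension
print([n * 2 for n in numbers])   # [0, 2, 4, 6, 8]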

Given the amount of data you'll be working with in real-life data science situations, it's also useful to know your way around NumPy arrays.

Lists are easily initialized with the help of square brackets ([]). Note also that you can make lists of lists, as in the variable my_list2! This is especially tricky when you're first starting out. Next, just as with strings, you also need to know how to select list elements. Don't forget that here, too, the index starts at 0.
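A short illustrative sketch of these points (the values are made up):

a = 'is'
b = 'nice'

# A list of strings, including variables that hold strings
my_list = ['my', 'list', a, b]

# A list of lists
my_list2 = [[4, 5, 6, 7], [3, 4, 5, 6]]

# Selecting list elements: the index starts at 0
print(my_list[1])       # 'list'
print(my_list[-1])      # last element: 'nice'
print(my_list[1:3])     # slice: ['list', 'is']
print(my_list2[1][0])   # element of a nested list: 3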

Libraries

When you have covered some of the absolute basics of Python, it’s time to get started with Python’s data science libraries. The popular ones that you should check out are pandas, NumPy, scikit-learn and matplotlib. But why are these libraries so important for data science?

  • Pandas is used for data manipulation with Python. The handy data structures that pandas offers, such as the Series and DataFrame, are indispensable for doing data analysis.
  • NumPy, a package that offers the NumPy array as a more efficient alternative data structure to lists, will come in handy when you get your hands dirty with data science.
  • Scikit-learn, on the other hand, is the ideal tool if you want to get started with machine learning and data mining.
  • Lastly, matplotlib is one of the basic Python libraries that you need to master to start making impressive visualizations of your data and analyses.

You immediately see that these four libraries will offer you everything that you need to get started with doing data science.

There will be times when you want to import these libraries in their entirety to build out your analyses, and other times when you want to perform only a selective import, bringing in just a few modules or functions from a library.

Also, there are certain conventions to follow when you import the libraries mentioned above: pandas is imported as pd, NumPy is imported as np, the scikit-learn package is actually called sklearn when you import its modules, and you import matplotlib.pyplot as plt.
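In code, those conventions look like this (the selective imports shown are just common examples, not a fixed list):

# Full imports, using the conventional aliases
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Selective imports: only bring in the pieces you need
from math import pi
from sklearn import linear_model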

Right now, these conventions might strike you as odd or totally unnecessary, but you’ll quickly see that it becomes easier as you start to work intensively with them.

Installing Python

Now that you have covered some of the basics, you might want to install Python if you haven’t already. Consider getting one of the Python distributions, such as Anaconda. It’s the leading open data science platform, powered by Python. The absolute advantage of installing Anaconda is that you’ll easily get access to over 720 packages that you can install with conda. But you also have a dependency and environment manager and the Spyder Integrated Development Environment (IDE). And as if these tools weren’t enough, you also get the Jupyter Notebook, an interactive data science environment that allows you to use your favorite data science tools and share your code and analyses with great ease.

In short, all the tools that you need to get started on doing data science with Python!

When you have imported the libraries that you need to do data science, you will probably want to get familiar with the most important data structure for scientific computing in Python: the NumPy array.

NumPy Arrays

You'll see that these arrays look a lot like lists and that, maybe to your surprise, you can convert your lists to NumPy arrays. Remember the performance question from above? This is the solution!

Subsetting and slicing NumPy arrays work very much like they do with lists. Don't forget that the index starts at 0 ;) When you look at the operations you can perform on NumPy arrays, you'll see that you can also subset them using the < or > comparison operators. You can also multiply and add NumPy arrays; these operations will, of course, change the values that your array holds.
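A brief sketch of what that looks like (made-up values):

import numpy as np

# Convert a list to a NumPy array
my_list = [1, 2, 3, 4]
my_array = np.array(my_list)

# Subsetting and slicing work much like lists (index starts at 0)
print(my_array[0])               # 1
print(my_array[1:3])             # array([2, 3])

# Subsetting with comparison operators
print(my_array[my_array > 2])    # array([3, 4])

# Arithmetic changes the values the array holds, element-wise
print(my_array * 2)              # array([2, 4, 6, 8])
print(my_array + np.array([10, 10, 10, 10]))   # array([11, 12, 13, 14])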


Some Useful Basic Statistical Functions

Lastly, there are some functions that will surely come in handy when you're starting out with Python: you can retrieve basic statistical measures such as the mean, median, correlation coefficient, and standard deviation with the help of mean(), median(), corrcoef(), and std(), respectively. You can also insert, delete, and append items to your arrays. Also, make sure not to miss the shape attribute, which gives you the dimensions of the array: if your array has n rows and m columns, you'll get back the tuple (n, m). Very handy if you want to inspect your data.
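For instance (a small sketch with made-up data, not the cheat sheet's own example):

import numpy as np

my_array = np.array([[1, 2, 3], [4, 5, 6]])

# Basic statistical measures
print(np.mean(my_array))        # mean of all elements
print(np.median(my_array))      # median
print(np.corrcoef(my_array))    # correlation coefficients between the rows
print(np.std(my_array))         # standard deviation

# The dimensions of the array: 2 rows and 3 columns gives (2, 3)
print(my_array.shape)

# Insert, delete and append items
print(np.append(my_array, [[7, 8, 9]], axis=0))   # add a new row
print(np.insert(my_array, 1, 99))                 # flattens, then inserts 99 at index 1
print(np.delete(my_array, [0]))                   # flattens, then deletes the element at index 0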

Python For Data Science Cheat Sheet

The first one that was published was the Python for Data Science cheat sheet; the full cheat sheet is available from DataCamp.

Author's Bio

Martijn is the co-founder of DataCamp, an online interactive education platform for data science that combines fun video instruction with in-browser coding challenges. In his spare time, he keeps himself busy collecting superhero T-shirts.

Categories: FLOSS Project Planets

Drupal Modules: The One Percent: Drupal Modules: The One Percent — Trash (video tutorial)

Planet Drupal - Tue, 2017-01-10 22:04
Drupal Modules: The One Percent — Trash (video tutorial) NonProfit Tue, 01/10/2017 - 21:04 Episode 13

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll investigate Trash, a module that sends deleted content entities to a bin from which they can later be restored or permanently removed.

Categories: FLOSS Project Planets

Vasudev Ram: Two simple Python object introspection functions

Planet Python - Tue, 2017-01-10 20:02
By Vasudev Ram



While browsing some Python code and docs, I recently got the idea for, and wrote, these two simple convenience functions for introspecting Python objects.

The function oa (for object attributes) can be used to get the attributes of any Python object:
def oa(o):
    # Print all of the attribute names of object o on one line
    for at in dir(o):
        print at,
(The reason I don't just type dir(o) instead of using oa(o), for some object o, is that in IPython (though not in vanilla Python), dir(o) displays the attributes in a vertical column, so the output scrolls off the screen if there are many attributes, while the oa() function prints them horizontally, so the output fits in a few lines without scrolling off.)

And running oa() a few times in the Python shell, gives (shell prompts removed):
oa({})
__class__ __cmp__ __contains__ __delattr__ __delitem__ __doc__ __eq__ __format__
__ge__ __getattribute__ __getitem__ __gt__ __hash__ __init__ __iter__ __le__ __len__
__lt__ __ne__ __new__ __reduce__ __reduce_ex__ __repr__ __setattr__ __setitem__
__sizeof__ __str__ __subclasshook__ clear copy fromkeys get has_key items
iteritems iterkeys itervalues keys pop popitem setdefault update values viewitems
viewkeys viewvalues

# object attributes of a list:
oa([])
__add__ __class__ __contains__ __delattr__ __delitem__ __delslice__ __doc__ __eq__
__format__ __ge__ __getattribute__ __getitem__ __getslice__ __gt__ __hash__ __iadd__
__imul__ __init__ __iter__ __le__ __len__ __lt__ __mul__ __ne__ __new__
__reduce__ __reduce_ex__ __repr__ __reversed__ __rmul__ __setattr__ __setitem__
__setslice__ __sizeof__ __str__ __subclasshook__ append count extend index insert
pop remove reverse sort

# object attributes of an int:
oa(1)
__abs__ __add__ __and__ __class__ __cmp__ __coerce__ __delattr__ __div__ __divmod__
__doc__ __float__ __floordiv__ __format__ __getattribute__ __getnewargs__ __hash__
__hex__ __index__ __init__ __int__ __invert__ __long__ __lshift__ __mod__
__mul__ __neg__ __new__ __nonzero__ __oct__ __or__ __pos__ __pow__ __radd__ __rand__
__rdiv__ __rdivmod__ __reduce__ __reduce_ex__ __repr__ __rfloordiv__ __rlshift__
__rmod__ __rmul__ __ror__ __rpow__ __rrshift__ __rshift__ __rsub__ __rtruediv__
__rxor__ __setattr__ __sizeof__ __str__ __sub__ __subclasshook__ __truediv__
__trunc__ __xor__ bit_length conjugate denominator imag numerator real

The function oar (for object attributes regular, meaning that it excludes the special or "dunder" methods, i.e. those starting and ending with a double underscore) can be used to get only the "regular" attributes of any Python object.
def oar(o):
    # Print only the "regular" (non-dunder) attribute names of object o
    for at in dir(o):
        if not at.startswith('__') and not at.endswith('__'):
            print at,
The output from running it:
# regular object attributes of a dict:
oar({})
clear copy fromkeys get has_key items iteritems iterkeys itervalues keys pop popitem
setdefault update values viewitems viewkeys viewvalues

# regular object attributes of an int:
oar(1)
bit_length conjugate denominator imag numerator real

# regular object attributes of a string:
oar('')
_formatter_field_name_split _formatter_parser capitalize center count decode encode
endswith expandtabs find format index isalnum isalpha isdigit islower isspace
istitle isupper join ljust lower lstrip partition replace rfind rindex rjust rpartition
rsplit rstrip split splitlines startswith strip swapcase title translate upper zfill
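Since the code above uses the Python 2 print statement, here, for reference, is a rough Python 3 adaptation of the same two helpers (not from the original post):

def oa(o):
    # Print all attribute names of object o on one line
    print(' '.join(dir(o)))

def oar(o):
    # Print only the "regular" attribute names, skipping dunder names
    print(' '.join(at for at in dir(o)
                   if not at.startswith('__') and not at.endswith('__')))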

Here are some more posts about Python introspection.

Enjoy.

- Vasudev Ram - Online Python training and consulting

Get updates (via Gumroad) on my forthcoming apps and content.

Jump to posts: Python * DLang * xtopdf

Subscribe to my blog by email

My ActiveState Code recipes

Follow me on: LinkedIn * Twitter

Categories: FLOSS Project Planets

Dirk Eddelbuettel: nanotime 0.1.0: Now on Windows

Planet Debian - Tue, 2017-01-10 19:49

Last month, we released nanotime, a package to work with nanosecond timestamps. See the initial release announcement for some background material and a few first examples.

nanotime relies on the RcppCCTZ package for high(er) resolution time parsing and formatting: R itself stops a little short of a microsecond. And it uses the bit64 package for the actual arithmetic: time at this granularity is commonly represented as (integer) increments (at nanosecond resolution) relative to an offset, for which the standard epoch of January 1, 1970 is used. int64 types are a perfect match here, and bit64 gives us an integer64. Naysayers will point out some technical limitations with R's S3 classes, but it works pretty much as needed here.

The one thing we did not have was Windows support. RcppCCTZ and the CCTZ library it uses need real C++11 support, and the g++-4.9 compiler used on Windows falls a little short, lacking inter alia a suitable std::get_time() implementation. Enter Dan Dillon, who ported this from LLVM's libc++, which led to Sunday's RcppCCTZ 0.2.0 release.

And now we have all our ducks in a row: everything works on Windows too. The next paragraph summarizes the changes for both this release as well as the initial one last month:

Changes in version 0.1.0 (2017-01-10)
  • Added Windows support thanks to expanded RcppCCTZ (closes #6)

  • Added "mocked up" demo with nanosecond delay networking analysis

  • Added 'fmt' and 'tz' options to output functions, expanded format.nanotime (closing #2 and #3)

  • Added data.frame support

  • Expanded tests

Changes in version 0.0.1 (2016-12-15)
  • Initial CRAN upload.

  • Package is functional and provides examples.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Daniel Bader: Comprehending Python’s Comprehensions

Planet Python - Tue, 2017-01-10 19:00
Comprehending Python’s Comprehensions

One of my favorite features in Python is list comprehensions. They can seem a bit arcane at first, but when you break them down they are actually a very simple construct.

The key to understanding list comprehensions is that they’re just for-loops over a collection expressed in a more terse and compact syntax. Let’s take the following list comprehension as an example:

>>> squares = [x * x for x in range(10)]

It computes a list of all integer square numbers from 0 to 9:

>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

If we wanted to build the same list using a plain for-loop we’d probably write something like this:

>>> squares = []
>>> for x in range(10):
...     squares.append(x * x)

That’s a pretty straightforward loop, right? If you try and generalize some of this structure you might end up with a template similar to this:

(values) = [ (expression) for (value) in (collection) ]

The above list comprehension is equivalent to the following plain for-loop:

(values) = []
for (value) in (collection):
    (values).append( (expression) )

Again, a fairly simple cookiecutter pattern you can apply to most for loops. Now there’s one more useful element we need to add to this template, and that is element filtering with conditions.

List comprehensions can filter values based on some arbitrary condition that decides whether or not the resulting value becomes a part of the output list. Here’s an example:

>>> even_squares = [x * x for x in range(10) if x % 2 == 0]

This list comprehension will compute a list of the squares of all even integers from 0 to 9.

If you’re not familiar with what the modulo (%) operator does—it returns the remainder after division of one number by another. In this example the %-operator gives us an easy way to test if a number is even by checking the remainder after we divide the number by 2.
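For instance (a quick illustrative check, not from the original post):

>>> 4 % 2
0
>>> 5 % 2
1

With that check in place, here is the resulting list of even squares: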

>>> even_squares
[0, 4, 16, 36, 64]

Similarly to the first example, this new list comprehension can be transformed into an equivalent for-loop:

even_squares = []
for x in range(10):
    if x % 2 == 0:
        even_squares.append(x * x)

Let’s try and generalize the above list comprehension to for-loop transform again. This time we’re going to add a filter condition to our template to decide which values end up in the resulting list.

Here’s the list comprehension template:

values = [expression for value in collection if condition]

And we can transform this list comprehension into a for-loop with the following pattern:

vals = []
for value in collection:
    if condition:
        vals.append(expression)

Again, this is a straightforward transformation—we simply apply our cookiecutter pattern again. I hope this dispelled some of the “magic” in how list comprehensions work. They’re really quite a useful tool.

Before you move on I want to point out that Python not only supports list comprehensions but also has similar syntax for sets and dictionaries.

Here’s what a set comprehension looks like:

>>> { x * x for x in range(-9, 10) }
set([64, 1, 36, 0, 49, 9, 16, 81, 25, 4])

And this is a dict comprehension:

>>> { x: x * x for x in range(5) }
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

Both are useful tools in practice. There’s one caveat to Python’s comprehensions—as you get more proficient at using them it becomes easier and easier to write code that’s difficult to read. If you’re not careful you might have to deal with monstrous list, set, dict comprehensions soon. Remember, too much of a good thing is usually a bad thing.

After much chagrin I’m personally drawing the line at one level of nesting for comprehensions. I found that in most cases it’s better (as in “more readable” and “easier to maintain”) to use for-loops beyond that point.
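For example, a single level of nesting can still read reasonably well (a hypothetical illustration, not from the original post):

>>> matrix = [[1, 2, 3], [4, 5, 6]]
>>> [[x * x for x in row] for row in matrix]
[[1, 4, 9], [16, 25, 36]]

Anything deeper than this quickly becomes harder to read than the equivalent nested for-loops.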

Categories: FLOSS Project Planets

Red Route: Close enough for a side project - how to know when things are good enough

Planet Drupal - Tue, 2017-01-10 17:05

Once upon a time, before I became a web developer, I worked doing sound and light for events. Like web development, the hours tend to be long, and the work tends to attract anti-social oddballs. Unlike web development, you deal with rock stars. Like most professions, it’s full of jargon and in-jokes. One of the phrases in common usage in that industry was “close enough for rock and roll”. Depending on who you were talking to, it might have been jazz rather than rock and roll, but you get the idea. It isn’t too far away from the notion previously popularised by Voltaire: “Perfect is the enemy of good”.

Another formulation of the same idea was expressed slightly more succinctly (and less politely) on the T-shirt shown in the photo attached to this blog post. It was designed by and for my old student union stage crew. While the T-shirt encapsulates a lot of the attitude of the team, it would be a mistake to imagine that we were sloppy. As I’ve written before, we were unpaid, but we certainly weren’t amateurs. That volunteer crew probably had as high a level of professionalism as any team I’ve been a part of. The point was that we cared about the right things, and there comes a point where you have to accept that things are as good as they are realistically going to get.

There’s a point where it just isn’t worth putting in more effort before launch. You’ve done as much as you can, and you’re proving the law of diminishing returns, or the Pareto principle. Besides, if you carry on fixing things, you’ll end up delaying your launch. If you’re getting ready to put on a show, there's a point where you have to open the doors, whether or not things are 100% ready. There are people outside who have bought tickets. They don’t want to wait while you mess about with your final preparations. They’re here to see the show, and their idea of perfection is very likely to be different from yours.

In events, these people are (somewhat dismissively) called ‘punters’, and they don’t notice the same things that you do. Unless they’re in the business themselves, their perception of the event is likely to be very different to yours. Working on stage crew, I’ve cobbled things together with gaffa tape and crossed my fingers, and the show has turned out fantastically well. As a musician, I’ve done shows that I thought were riddled with mistakes, and people in the audience told me it was the best gig they’d seen us play. Punters don’t go to gigs for technically competent sound or lighting, or flawless performances - they go because they want to have a good time. Things are never as shiny backstage as they are front of house.

Similarly, as a developer, I’ve built sites that have won awards, although the code made me cringe. From a technical point of view, you may not be proud of the code or the architecture underlying a website, but that’s not what people are here for. They are visiting your website for the content, or some of the interactivity that it provides. Most sites are not perfect, but they don’t need to be.

There are certain areas where you should never cut corners. In events, that’s things like rigging lights from the ceiling - you have to make sure that they won’t fall on anyone’s head. In development, it’s in security - you have to make sure that people’s data is safe. If there’s a chance people might get hurt, there’s no excuse for sloppiness. But in most areas of most projects, you’re never going to get things perfect - you need to know what’s good enough.

I’ve been working on my own current side project (the redesign and Drupal 8 upgrade of an art gallery listings site) for a while now, in between the day job and family life. Just before Christmas, I still had quite a few tasks left on my board, but having read an article by Ben Roux, I knew that I needed to get the thing live, sooner rather than later. This is the great thing about side projects - there's something marvellously liberating being able to just make that decision for myself, without consultation with clients or stakeholders.

The great thing about putting your work out there on the web is that publishing improvements is trivial. I’ve defined my minimum viable redesign, so I can iterate and improve after launch, even though some of the functionality from the old Drupal 6 version isn’t ready. I could keep polishing and polishing, but it’s better to put it out there, and fix things later. Besides, I’m pretty confident that the new design is better than the old one, in spite of a few rough edges. Chances are, I’m the only person who will pick up on most of the problems with the site.

On the other hand, this means that nothing is ever finished. One of the things I loved about working in events was that there was a clear beginning, middle and end. We would start with an empty room and a full truck. We would unpack the truck, and fill the room with gear. Then we would put on a show. After the show, we’d pack up the gear and put it back into the truck. Then the truck would drive to the next venue, and we could (usually) go home with the satisfaction of a job well done.

With web development, it’s more like endlessly pushing a boulder up a hill. After putting a site live, there’s no great moment of triumph. If we take the analogy of a concert, it’s more like opening the doors and letting the punters in. Unless it’s a short-lived marketing site, the only time you ever take a website down and pack everything away is if the business has failed. There are always things to improve. I could keep adding features, or tweaking the design, or improving performance forever. But what value would it add? Besides, it’s a side project, and I’ve got other things to do.

Tags:  The Gallery Guide Drupal Drupal 8 work All tags
Categories: FLOSS Project Planets

Bálint Réczey: Debian Developer Game of the Year

Planet Debian - Tue, 2017-01-10 17:03

I have just finished level one, fixing all RC bugs in packages under my name, even in team-maintained ones.

Categories: FLOSS Project Planets

John Svensson: Uninstall Drupal 8 modules that includes default configuration

Planet Drupal - Tue, 2017-01-10 17:02

In our modules we can include default configuration to ship content types, views, vocabularies on install. If we try to reinstall the module, we'll get an error. Let's take a look at why and how we can solve it.

Unable to install … already exist in active configuration

During uninstall the configuration is not removed, because it has no connection to the module itself. When we then try to reinstall the module, we get the error, because Drupal modules may not replace active configuration; the reason for that restriction is to prevent configuration loss.

We can move all the configuration to config/optional rather than config/install, which basically means that configuration that does not yet exist will be installed and existing configuration will be ignored. If we do this, we can reinstall the module.

So, if we want the configuration to remain active after uninstall, we've already solved the problem. But if we instead want the module to remove all the configuration and content it provided, we'll need to look at another solution.

Let's take a look at the Article content type that is created when using the Standard distribution on installing Drupal.

node.type.article.yml

langcode: en
status: true
dependencies: { }
name: Article
type: article
description: 'Use <em>articles</em> for time-sensitive content like news, press releases or blog posts.'
help: ''
new_revision: true
preview_mode: 1
display_submitted: true

field.field.node.article.body.yml

langcode: en
status: true
dependencies:
  config:
    - field.storage.node.body
    - node.type.article
  module:
    - text
id: node.article.body
field_name: body
entity_type: node
bundle: article
label: Body
description: ''
required: false
translatable: true
default_value: { }
default_value_callback: ''
settings:
  display_summary: true
field_type: text_with_summary

If we delete the content type, we find that the body field instance on the content type is removed as well. We see this first when confirming the deletion of the content type, which lists the configuration that will be removed, and also by looking at the config table in the database, where active configuration is stored: it's gone.

The reason it's deleted is that it has a dependency on the node.type.article configuration, as we can see:

dependencies:
  config:
    - field.storage.node.body
    - node.type.article
  module:
    - text

So what we need to do is make sure that the configuration for the content type our module creates on install lists the module itself as a dependency. Let's take a look at how we can do that.

Let's imagine we have a custom_event module that creates an event content type.

node.type.event.yml

langcode: en
status: true
dependencies:
  enforced:
    module:
      - custom_event
third_party_settings: {}
name: Event
type: event
description: 'Event'
help: ''
new_revision: true
preview_mode: 1
display_submitted: false

The dependencies-part is the interesting one:

dependencies:
  enforced:
    module:
      - custom_event

We have defined a custom_event module, and in that module we have some exported configuration files in the config/install folder. We update the node.type.event.yml configuration file to list our module as an enforced dependency. Now when we uninstall the module, the content type will be removed.

We also have to do this for our views, taxonomies, and field storages, or pretty much any configuration entity our module provides configuration for. We don't have to worry about field instances; as we saw above, those depend on the content type itself. Field storages, on the other hand, do not depend on a content type, because a field can be reused across multiple content types.

So, just add the module as an enforced dependency and you're good to go. Here's an example for a field storage, field_image:

field.storage.node.field_image.yml

langcode: en
status: true
dependencies:
  enforced:
    module:
      - custom_event
  module:
    - file
    - image
    - node
id: node.field_image
field_name: field_image
entity_type: node
type: image
settings:
  uri_scheme: public
  default_image:
    uuid: null
    alt: ''
    title: ''
    width: null
    height: null
  target_type: file
  display_field: false
  display_default: false
module: image
locked: false
cardinality: 1
translatable: true
indexes:
  target_id:
    - target_id
persist_with_no_fields: false
custom_storage: false
Categories: FLOSS Project Planets

Palantir: Prominent Midwest Business School

Planet Drupal - Tue, 2017-01-10 14:35
Prominent Midwest Business School brandt Tue, 01/10/2017 - 13:35
Visibility of Research and Ideas Through a Beautiful Design

A highly customizable layout to showcase our client's content.

Highlights
  • Highly customizable layout
  • A strong visual hierarchy punctuated with thoughtful typographic details  
  • Robust tagging and taxonomy for showcasing content

We want to make your project a success.

Let's Chat.

One of the oldest business schools in the country with a worldwide network of over 50,000 alumni, our client is one of the top-ranked business schools in the world. With a reputation for new ideas and research, it needs its online presence to reflect its bold, innovative approach to business education.

Our client's approach to learning is to "constantly question, test ideas, and seek proof," which leads to new ideas and innovative solutions within business. Beyond the classroom, this approach empowers thought leaders and analytical minds to shape the future through deeper analysis and discovery.

In order to create greater visibility of the school’s research and ideas, the school publishes digital and print publications that give insight to business leaders and policymakers who can act upon those ideas. The publications help convey the concepts and research findings generated by the school, as well as by academics outside the school, via articles, infographics, charts, videos, and other devices.

Since both publications were due for an upgrade, our client decided to update its web presence and quarterly print magazine at the same time. With our client having already partnered with another firm to complete the magazine design, Palantir was brought in to design and develop the website in a way that provided cohesion between the two mediums.

Goals and Direction

Because the publication’s previous site was outdated and not optimized for mobile devices, the overall performance was less than ideal, with long loading times causing problems on tablets and smartphones. Additionally, the hierarchy on the old site needed some rethinking: the site was difficult to navigate, with limitations on how articles and videos could be connected, since visitors couldn’t easily or effectively filter or search the contents of the site.

The primary high-level goals for the redesign were to:

  • Move the site to Drupal, which allowed for increased community support and more flexibility in tools that could be incorporated
  • Provide an experience that demonstrated the depth of content from the publication
  • Optimize content search and social visibility
  • Provide a flexible platform that would evolve as the content evolved
  • Highlight visuals and alternative story formats
  • Leverage content featuring the business school’s high-profile faculty

To accomplish the majority of these goals, a well-planned content strategy was required. The new site needed the ability to display content in a variety of ways in order to make it digestible and actionable for readers. All content had to be easily sharable and optimized for the greatest possibility of distribution. Lastly, once readers found content they wanted, related content should surface as well.

To support the content strategy, site editors needed the power to elevate content connections and relations through a strong visual design that offered flexibility, and with hierarchy that made sure news matched what visitors were looking for. They also required options on how to best showcase content, as well as the ability to include multiple storytelling mechanisms.

Examples of how related content is displayed in both desktop and mobile views while preserving typographic details.

Site Focus

Once goals were outlined, Palantir worked to create a solid content strategy for the site. Taxonomy and tagging were addressed, and a new information architecture was put in place in order to make sure content was being seen and surfaced at the right time. After the initial strategy work was completed, it was decided to focus the work in two key areas:

  • Increase the number of pages during each visit. Prior to the redesign, a large percentage of visitors were coming in from search and social for a single article and then leaving before accessing any more articles, videos, or graphics. The former content management system made tying in related content difficult, so attention was given to each step of the process to make sure that related content could be easily surfaced by editors or by the system based on tagging. A goal was to give visitors more of the type of content they were looking for.
  • Improve number of visits per month. As important as it is to make navigating the site better, ensuring that visitors could find the content more easily was a key improvement that was needed for the site. As part of that, the site required better metadata for search and social optimization.
Style tile: Playful/Colorful/Progressive
Style tile: Dynamic/Modern/Bright

The design of the site was critical to achieving the goals, and we were given latitude to push the boundaries of the existing brand standards in order to give the magazine a distinctive look through color and typography. Our design team expanded the existing brand signals from our client in order to create style tiles to represent the general mood, typography, and colors recommended for the new site. With the variations in image treatment, typography, color palette, and button/quote designs, each style tile worked to capture a specific voice and personality for the magazine. This allowed us to quickly gauge how our client wanted to present itself to the world prior to starting formal layouts.

Once the final direction was chosen, we concentrated next on wireframing the pages so we could marry the functionality with the intended hierarchy and ensure that the revised content strategy was working as efficiently and powerfully as possible. Given that users spent the majority of time on those pages, individual article pages were given particular emphasis in wireframing, to ensure users would be enticed to explore further into the site.

In the layout phase, typography was given special attention, with drop-caps and call-out quotes being implemented to keep a print feel and break up a sea of gray text for long articles. This approach was particularly necessary on the mobile view, allowing the content to take center stage without losing any of the design details. Sharing on social media was also accounted for, with shorter quotes having their own dedicated sharing buttons. Additionally, the wealth of artwork provided by the school’s internal team created exciting opportunities for our design team on all applicable articles.

While not always the entry point for users, the homepage design needed to be as flexible as possible. Not only did we have to account for the hierarchy of stories and surface related content in the right way, but to allow for multiple types of content to be displayed while balancing related articles and email newsletter sign-up forms.

Examples of how the homepage can be customized to promote specific sections of content.

The Results

In the end, our client received a beautiful design that showcases the content well, and provides easy access to related content. Back-end editors are happy with the improved formatting and editing ability, with one editor saying, “it’s not even comparable to our previous site.” Our designs introduced new ways of visually displaying a wealth of content and helped push innovation for the magazine redesign. The print designers were able to apply our design signals to the associated print piece, creating a strong cohesion across all collateral to enhance the school’s brand.

The variety offered by the new homepage design supported the goal of increasing the page views, providing more opportunities to entice people to continue their experience, with connections between different articles and videos. With much of the greater University on Drupal, the move to Drupal also allowed the staff increased support within the school community. The flexibility of the modular design and build allowed staff to control the process and to be able to leverage important research to audiences and policymakers for years to come.

Content collapsed into mobile views.

 

We want to make your project a success.

Let's Chat. Drupal Services strategy design development
Categories: FLOSS Project Planets

Python Data: Jupyter with Vagrant

Planet Python - Tue, 2017-01-10 14:03

I’ve written about using vagrant for 99.9% of my python work on here before (see here and here for examples).   In addition to vagrant, I use jupyter notebooks on 99.9% of the work that I do, so I figured I’d spend a little time describing how I use jupyter with vagrant.

First off, you’ll need to have vagrant set up and running (descriptions for linux, MacOS, Windows).   Once you have vagrant installed, we need to make a few changes to the VagrantFile to allow port forwarding from the vagrant virtual machine to the browser on your computer. If you followed the Vagrant on Windows post, you’ll have already set up the configuration that you need for vagrant to forward the necessary port for jupyter.   For those that haven’t read that post, below are the tweaks you need to make.

My default VagrantFile is shown in figure 1 below.

Figure 1: VagrantFile Example

You’ll only need to change 1 line to get port forwarding working.   You’ll need to change the line that reads:

# config.vm.network "forwarded_port", guest: 80, host: 8080

to the following:

 config.vm.network "forwarded_port", guest: 8888, host: 8888

This line will forward port 8888 on the guest to port 8888 on the host. If you aren’t using the default port of 8888 for jupyter, you’ll need to change ‘8888’ to the port you wish to use.

Now that the VagrantFile is ready to go, do a quick ‘vagrant up’ and ‘vagrant ssh’ to start your vagrant VM and log into it. Next, set up any virtual environments that you want / need (I use virtualenv to set up a virtual environment for every project).  You can skip this step if you wish, but it is recommended.

If you set up a virtual environment, go ahead and source into it so that you are using a clean environment, and then run the command below to install jupyter. If you didn't set one up, you can just run the same command directly.

pip install jupyter

You are all set.  Jupyter should be installed and ready to go. To run it so it is accessible from your browser, just run the following command:

jupyter notebook --ip=0.0.0.0

This command tells jupyter to listen on any IP address.

In your browser, you should be able to visit your newfangled jupyter (via vagrant) instance at the following URL:

http://0.0.0.0:8888/tree

Now you’re ready to go with jupyter with vagrant.

Note: If you are wanting / needing to learn Jupyter, I highly recommend Learning IPython for Interactive Computing and Data Visualization (amazon affiliate link). I recommend it to all my clients who are just getting started with jupyter and ipython.

 

 

The post Jupyter with Vagrant appeared first on Python Data.

Categories: FLOSS Project Planets

Enthought: Loading Data Into a Pandas DataFrame: The Hard Way, and The Easy Way

Planet Python - Tue, 2017-01-10 13:44

Data exploration, manipulation, and visualization start with loading data, be it from files or from a URL. Pandas has become the go-to library for all things data analysis in Python, but if your intention is to jump straight into data exploration and manipulation, the Canopy Data Import Tool can help you get there without first having to learn the details of programming with the Pandas library.

The Data Import Tool leverages the power of Pandas while providing an interactive UI, allowing you to visually explore and experiment with the DataFrame (the Pandas equivalent of a spreadsheet or a SQL table), without having to know the details of the Pandas-specific function calls and arguments. The Data Import Tool keeps track of all of the changes you make (in the form of Python code). That way, when you are done finding the right workflow for your data set, the Tool has a record of the series of actions you performed on the DataFrame, and you can apply them to future data sets for even faster data wrangling in the future.

At the same time, the Tool can help you pick up how to use the Pandas library, while still getting work done. For every action you perform in the graphical interface, the Tool generates the appropriate Pandas/Python code, allowing you to see and relate the tasks to the corresponding Pandas code.

With the Data Import Tool, loading data is as simple as choosing a file or pasting a URL. If a file is chosen, it automatically determines the format of the file, whether or not the file is compressed, and intelligently loads the contents of the file into a Pandas DataFrame. It does so while taking into account various possibilities that often throw a monkey wrench into initial data loading: that the file might contain lines that are comments, it might contain a header row, the values in different columns could be of different types e.g. DateTime or Boolean, and many more possibilities as well.

The Data Import Tool makes loading data into a Pandas DataFrame as simple as choosing a file or pasting a URL.

A Glimpse into Loading Data into Pandas DataFrames (The Hard Way)

The following 4 “inconvenience” examples show typical problems (and the manual solutions) that might arise if you are writing Pandas code to load data, which are automatically solved by the Data Import Tool, saving you time and frustration, and allowing you to get to the important work of data analysis more quickly.

Let’s say you were to load data from the file by yourself. After searching the Pandas documentation a bit, you will come across the pandas.read_table function which loads the contents of a file into a Pandas DataFrame. But it’s never so easy in practice: pandas.read_table and other functions you might find assume certain defaults, which might be at odds with the data in your file.

Inconvenience #1: Data in the first row will automatically be used as a header.  Let’s say that your file (like this one: [wind.data]) uses whitespace as the separator between columns and doesn’t have a row containing column names. pandas.read_table assumes by default that your file contains a header row and uses tabs for delimiters. If you don’t tell it otherwise, Pandas will use the data from the first row in your file as column names, which is clearly wrong in this case.

From the docs, you can discover that this behavior can be turned off by passing header=None, and that sep='\s+' tells pandas.read_table to use varying whitespace as the separator. First, here is what happens if we only specify the separator:

In [1]: df = pandas.read_table('wind.data', sep='\s+')
In [2]: df.head()
Out[2]:
61  1  1.1  15.04  14.96  13.17   9.29  13.96  9.87  13.67  10.25  10.83  \
0  61  1    2  14.71  16.88  10.83   6.50  12.62  7.67  11.50  10.04   9.79
1  61  1    3  18.50  16.88  12.33  10.13  11.17  6.17  11.25   8.04   8.50
12.58  18.50  15.04.1
0   9.67  17.54    13.83
1   7.67  12.75    12.71

Without the header=None kwarg, you can see that the first row of data is being treated as column names. Passing header=None as well gives us the expected behavior:

In [3]: df = pandas.read_table('wind.data', header=None, sep='\s+')
In [4]: df.head()
Out[4]:
0   1   2      3      4      5      6      7     8      9      10     11  \
0  61   1   1  15.04  14.96  13.17   9.29  13.96  9.87  13.67  10.25  10.83
1  61   1   2  14.71  16.88  10.83   6.50  12.62  7.67  11.50  10.04   9.79
12     13     14
0  12.58  18.50  15.04
1   9.67  17.54  13.83

This is the behavior we expected once we told Pandas that the file does not contain a header row (header=None) and specified the separator.

The next example uses the following file:

[File : test_data_comments.txt]

Inconvenience #2: Commented lines cause the data load to fail.  Next let's say that your file contains commented lines which start with a #. Pandas doesn't understand this by default, and trying to load the data into a DataFrame will either fail with an error or, worse, succeed without notifying you that one row in the DataFrame might contain erroneous data from the commented line. (This might also prevent correct inference of column types.)

Again, you can tell pandas.read_table that commented lines exist in your file and that it should skip them, using comment='#':

In [1]: df = pandas.read_table('test_data_comments.txt', sep=',', header=None)
---------------------------------------------------------------------------
CParserError                              Traceback (most recent call last)
<ipython-input-10-b5cd8eee4851> in <module>()
----> 1 df = pandas.read_table('catalyst/tests/data/test_data_comments.txt', sep=',', header=None)
(traceback)
CParserError: Error tokenizing data. C error: Expected 1 fields in line 2, saw 5

As mentioned earlier, if you are lucky, Pandas will fail with a CParserError, complaining that rows contain different numbers of columns in the data file. Needless to say, it's not obvious from this error that the culprit is an unrecognized comment line:

In [2]: df = pandas.read_table('test_data_comments.txt', sep=',', comment='#', header=None)
In [3]: df
Out[3]:
0   1    2      3            4
0  1  False  1.0    one   2015-01-01
1  2   True  2.0    two   2015-01-02
2  3  False  3.0  three  2015-01-03
3  4   True  4.0  four  2015-01-04

And we can read the file contents correctly when we tell pandas that '#' is the character that commented lines in the file start with.

The next example uses the following fixed-width file:

[File : ncaa_basketball_2016.txt]

Inconvenience #3: Fixed-width formatted data will cause data load to fail.  Now let’s say that your file contains data in a fixed-width format. Trying to load this data using pandas.read_table will fail.

Dig around a little and you will come across the function pandas.read_fwf, which is the suggested way to load data from fixed-width files, not pandas.read_table.

In [1]: df = pandas.read_table('ncaa_basketball_2016.txt', header=None)
In [2]: df.head()
Out[2]:
0
0  2016-02-25 @Ark Little Rock          72  UT Ar...
1  2016-02-25  ULM                      66 @South...

Those of you familiar with Pandas will recognize that the DataFrame created from the file contains only one column, labelled 0. This is clearly wrong, because the file contains several distinct columns of data (dates, team names and scores).

In [3]: df = pandas.read_table('ncaa_basketball_2016.txt', header=None, sep='\s+')
---------------------------------------------------------------------------
CParserError                              Traceback (most recent call last)
<ipython-input-28-db4f2f128b37> in <module>()
----> 1 df = pandas.read_table('functional_tests/data/ncaa_basketball_2016.txt', header=None, sep='\s+')
(Traceback)
CParserError: Error tokenizing data. C error: Expected 8 fields in line 55, saw 9

If we didn't know better, we might assume that the delimiter used in the file is whitespace and tell Pandas to load the file again with sep='\s+'. But, as you can clearly see above, that raises a CParserError, complaining that one row contains more columns of data than an earlier row: team names that contain spaces get split into extra columns.

In [4]: df = pandas.read_fwf('ncaa_basketball_2016.txt', header=None)
In [5]: df.head()
Out[5]:
0                 1   2               3   4    5
0  2016-02-25  @Ark Little Rock  72    UT Arlington  60  NaN
1  2016-02-25               ULM  66  @South Alabama  59  NaN

And finally, using pandas.read_fwf instead of pandas.read_table gives us a DataFrame that is close to what we expected, given the data in the file.
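If read_fwf's automatic inference ever guesses the column boundaries wrong, you can also spell them out yourself using the widths (or colspecs) argument. The sketch below uses assumed field widths and invented column names; they are not the actual layout of ncaa_basketball_2016.txt:

import pandas

# Explicit field widths override read_fwf's inference. The widths and
# names here are assumptions for illustration only.
df = pandas.read_fwf('ncaa_basketball_2016.txt', header=None,
                     widths=[12, 18, 4, 18, 4],
                     names=['date', 'away_team', 'away_score',
                            'home_team', 'home_score'])
print(df.head())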

Inconvenience #4: NA is not recognized as text; it is automatically converted to a missing value.  Finally, let's assume that you have raw data containing the string NA, which in this specific case is used to represent North America. By default, pandas.read_csv (and pandas.read_table) interprets this string as a missing value and silently converts it to NaN, without informing the user. The Zen of Python says that explicit is better than implicit; in that spirit, the Tool explicitly lists the values that will be interpreted as missing (NaN).

The user can remove NA (or any of the other values) from this list to prevent it from being interpreted as a missing value, as demonstrated with the following file:

[File : test_data_na_values.csv]

In [2]: df = pandas.read_table('test_data_na_values.csv', sep=',', header=None)
In [3]: df
Out[3]:
0  1       2
0 NaN  1    True
1 NaN  2   False
2 NaN  3   False
3 NaN  4    True
In [4]: df = pandas.read_table('test_data_na_values.csv', sep=',', header=None, keep_default_na=False, na_values=[])
In [5]: df
Out[5]:
0  1       2
0  NA  1    True
1  NA  2   False
2  NA  3   False
3  NA  4    True
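Turning off the default missing-value markers doesn't mean giving up missing-value handling altogether: you can add your own markers back in through na_values. A sketch, assuming you want to keep 'NA' as literal text but still treat empty fields as missing:

import pandas

# keep_default_na=False disables the built-in list of missing-value markers
# (which includes 'NA'); na_values=[''] then re-adds only the empty string,
# so blank fields become NaN while 'NA' survives as text.
df = pandas.read_csv('test_data_na_values.csv', header=None,
                     keep_default_na=False, na_values=[''])
print(df)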

If your intention is to jump straight into data exploration and manipulation, the points above are some of the inconveniences you will have to deal with, requiring you to learn the various arguments that need to be passed to pandas.read_table before you can load your data correctly and get to your analysis.

Loading Data with the Data Import Tool (The Easy Way)

The Canopy Data Import Tool automatically circumvents several common data loading inconveniences and errors by simply setting up the correct file assumptions in the Edit Command dialog box.

The Data Import Tool takes care of all of these problems for you, allowing you to fast forward to the important work of data exploration and manipulation. It automatically:

  1. Infers if your file contains a row of column names or not;
  2. Intelligently infers if your file contains any commented lines and what the comment character is;
  3. Infers what delimiter is used in the file or if the file contains data in a fixed-width format.

Download Canopy (free) and start a free trial of the Data Import Tool to see just how much time and frustration you can save!

The Data Import Tool as a Learning Resource: Using Auto-Generated Python/Pandas code

So far, we talked about how the Tool can help you get started with data exploration, without the need for you to understand the Pandas library and its intricacies. But, what if you were also interested in learning about the Pandas library? That’s where the Python Code pane in the Data Import Tool can help.

As you can see from the screenshot below, the Data Import Tool generates Pandas/Python code for every command you execute. This way, you can explore and learn about the Pandas library using the Tool.

View the underlying Python / Pandas code in the Data Import Tool to help learn Pandas code, without slowing down your work.
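For a sense of what this looks like, the generated code amounts to the same plain pandas calls we have been writing by hand. The snippet below is a hypothetical sketch in that style, not the Tool's literal output; the file, options and column names are all invented for illustration:

import pandas

# Hypothetical auto-generated code: load the file with the options chosen
# in the GUI, then apply a recorded column rename.
df = pandas.read_csv('test_data_na_values.csv', header=None,
                     keep_default_na=False, na_values=[])
df = df.rename(columns={0: 'region', 1: 'id', 2: 'flag'})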

Finally, once you are done loading data from the file and manipulating the DataFrame, you can export the DataFrame to Canopy’s IPython console for further analysis and visualization. Simply click Use DataFrame at the bottom-right corner and the Tool will export the DataFrame to Canopy’s IPython pane, as you can see below.

Import the cleaned data into the Canopy IPython console for further data analysis and visualization.

 Ready to try the Canopy Data Import Tool?

Download Canopy (free) and click on the icon to start a free trial of the Data Import Tool today.

Additional resources:

Watch a 2-minute demo video to see how the Canopy Data Import Tool works.

See the webinar “Fast Forward Through Data Analysis Dirty Work” for examples of how the Canopy Data Import Tool accelerates data munging.

Categories: FLOSS Project Planets

Fuse Interactive: UX for Content Editors & Administrators

Planet Drupal - Tue, 2017-01-10 13:44

User Experience as it relates to Content Management Systems tends to overlook its most important users: Content Editors & Administrators.

Categories: FLOSS Project Planets

Wiki, what’s going on? (Part20-2017 is here)

Planet KDE - Tue, 2017-01-10 12:11

The hype is great: WikiToLearn India Conf2017 is almost here!

Hello WikiToLearn-ers! First of all, let me wish a happy new year to all of you!

What better way to start the new year? With lots of news!

In less than two weeks, WikiToLearn India Conf2017 will take place. We are extremely happy because this is the first big international event entirely dedicated to WikiToLearn. We have to thank the members of our community who are working hard to bring you this amazing event. For sure, the best thing about this conference is the great variety of speakers: Ruphy is flying from Italy to India to attend the conference and give a talk about WTL, and we have speakers lined up from the MediaWiki, KDE and Mozilla communities. Several projects and ideas will meet at WTL India Conf2017, and this is simply amazing for us! The entire event will be recorded and the videos will be uploaded online: you won't miss any talk!

We have planned other great things for 2017. A few days ago, some members of the community met to discuss our targets for the future. We came up with a new strategic plan for the coming months: join our communication channels to discuss it with us. New talks, new posters and technical improvements are just around the corner. WikiToLearn 1.0 is great, but what's coming next is even better!

2016 was fantastic for us, but 2017 will be a turning point. Stay tuned!

The article Wiki, what’s going on? (Part20-2017 is here) first appeared on Blogs from WikiToLearn.

Categories: FLOSS Project Planets