Feeds

Here's a sneak peek at LibrePlanet 2017: Register today!

FSF Blogs - Thu, 2017-02-16 14:17

Don't delay: Register today to ensure that you will attend LibrePlanet 2017: The Roots of Freedom. Remember, FSF members and students attend gratis.

Hundreds of people from across the planet will gather at LibrePlanet 2017: The Roots of Freedom at MIT in Cambridge, Massachusetts. This year's conference speakers will examine the foundations of software freedom and the ideas and projects they inspired.

Four keynote speakers will anchor the event. Kade Crockford, director of the Technology for Liberty program of the American Civil Liberties Union of Massachusetts, will kick things off on Saturday morning by sharing how technologists can enlist in the growing fight for civil liberties. On Saturday night, Free Software Foundation president Richard Stallman will present the Free Software Awards and discuss pressing threats and important opportunities for software freedom.

Day two will begin with Cory Doctorow, science fiction author and special consultant to the Electronic Frontier Foundation, revealing how to eradicate all Digital Restrictions Management (DRM) in a decade. The conference will draw to a close with Sumana Harihareswara, leader, speaker, and advocate for free software and communities, giving a talk entitled "Lessons, Myths, and Lenses: What I Wish I'd Known in 1998."

That's not all. We'll hear about the GNU philosophy from Marianne Corvellec of the French free software organization April; Joey Hess will touch on encryption with a talk about backing up your GPG keys; and Denver Gingerich will update us on a crucial free software need: the mobile phone.

Others will look at ways to grow the free software movement: through cross-pollination with other activist movements, removal of barriers to free software use and contribution, and new ideas for free software as paid work. Speakers will include Software Freedom Conservancy's director of strategic initiatives Brett Smith, blind free software activist Chris Hofstader, and Micky Metts of the Cambridge, Massachusetts Web development collective Agaric. The full program will be published soon. In the meantime, you can see the list of confirmed speakers.

Each year at LibrePlanet, we gather software developers, activists, policy experts, and computer users to share accomplishments, learn skills, and address challenges to software freedom. Newcomers are always welcome, and LibrePlanet 2017 will feature programming for a broad range of experience levels, including students.

When planning your travel, keep in mind that while the conference proper will be Saturday and Sunday, there will be social events on Friday, Saturday, and Sunday evening.

LibrePlanet 2017 is produced by the Free Software Foundation in partnership with the Student Information Processing Board (SIPB) at MIT.

Pre-order a LibrePlanet 2017 t-shirt by March 6th

You can also pre-order a LibrePlanet 2017 commemorative t-shirt in the GNU Press shop. Order your shirt by March 6th, 7am EST (12:00 UTC) to guarantee availability in your size. If you will be picking up the shirt at the conference, use the code LP17 to waive shipping costs. If you want it shipped to you, don't use that code, and expect it to arrive after the conference.

Volunteers make LibrePlanet awesome

LibrePlanet has grown in scope and attendance over the years—it started out as a Free Software Foundation membership meeting. This conference would never have become the highly-anticipated event it is today without the help of dozens of volunteers who make things happen, before and during the conference—and it's a great way to meet fellow community members. There are even ways to help if you can't attend in person! If you are interested in helping out with LibrePlanet 2017, email resources@fsf.org. We show our appreciation for our volunteers by offering gratis conference admission and a LibrePlanet t-shirt.

Don't miss out on your chance to explore the roots of freedom. Register for LibrePlanet 2017 today!

Categories: FLOSS Project Planets

Valuebound: Beginner’s guide to Mail System in Drupal 7 and 8

Planet Drupal - Thu, 2017-02-16 13:40

This blog is all about how Drupal handles the mail system and the stages it goes through.
To send an email in Drupal, we need to take care of two things:

  1. Declare all the required properties under hook_mail().
  2. Call drupal_mail() with the appropriate arguments to actually send the email.

However, for bigger and more complex sites, the above steps won't be enough. Drupal gives us the flexibility to customize the email sending process, but before doing that it's necessary to know how things work behind the scenes. In this article I'll show you how you can customize and extend the Drupal mail system to fulfill your needs.

While sending an email, the drupal_mail() function uses the system class for…

Categories: FLOSS Project Planets

Mike Driscoll: Python’s New secrets Module

Planet Python - Thu, 2017-02-16 13:15

Python 3.6 added a new module called secrets that is designed “to provide an obvious way to reliably generate cryptographically strong pseudo-random values suitable for managing secrets, such as account authentication, tokens, and similar”. Python’s random module was never designed for cryptographic use but for modeling and simulation. Of course, you could always use the urandom() function from Python’s os module:

>>> import os
>>> os.urandom(8)
'\x9c\xc2WCzS\x95\xc8'

But now that we have the secrets module, we can create our own “cryptographically strong pseudo-random values”. Here’s a simple example:

>>> import secrets
>>> import string
>>> characters = string.ascii_letters + string.digits
>>> bad_password = ''.join(secrets.choice(characters) for i in range(8))
>>> bad_password
'SRvM54Z1'

In this example, we import the secrets and the string modules. Next we create a string of upper- and lowercase letters plus digits. Finally we use the secrets module's choice() method to choose characters randomly to generate a bad password. The reason I am calling this a bad password is that it is only eight characters long and doesn't include any symbols. Even so, it is fairly decent compared with what a lot of people use. One of my readers pointed out that another reason it could be considered bad is that the user would probably just write it down on a piece of paper. While that may be true, using dictionary words is typically strongly discouraged, so you should learn to use passwords like this or invest in a secure password manager.
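
If you want symbols in the mix, the same pattern works. Here's a variation of my own (the variable names are mine, not from the module's documentation) that widens the alphabet with punctuation and lengthens the password:

>>> import secrets
>>> import string
>>> alphabet = string.ascii_letters + string.digits + string.punctuation
>>> stronger_password = ''.join(secrets.choice(alphabet) for i in range(16))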

Generating Tokens with secrets

The secrets module also provides several methods of generating tokens. Here are some examples:

>>> secrets.token_bytes()
b'\xd1Od\xe0\xe4\xf8Rn</U\xf4G\xdb\x08\xa8\x85\xeb\xba>\x8cO\xa7XV\x1cb\xd6\x11\xa0\xcaK'

>>> secrets.token_bytes(8)
b'\xfc,9y\xbe]\x0e\xfb'

>>> secrets.token_hex(16)
'6cf3baf51c12ebfcbe26d08b6bbe1ac0'

>>> secrets.token_urlsafe(16)
'5t_jLGlV8yp2Q5tolvBesQ'

The token_bytes function will return a random byte string containing nbytes bytes. I didn't supply a number of bytes in the first example, so Python chose a reasonable default for me. Then I tried calling it again and asking for 8 bytes. The next function we tried is token_hex, which will return a random string in hexadecimal. The last function is token_urlsafe, which will return a random URL-safe text string. The text is Base64 encoded too! Note that in practice you should probably use at least 32 bytes for your tokens for them to be secure against a brute-force attack (source).
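
For example, in the spirit of the recipes in the documentation, token_urlsafe is handy for building hard-to-guess temporary URLs (the domain below is just a placeholder):

>>> url = 'https://example.com/reset=' + secrets.token_urlsafe(32)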

Wrapping Up

The secrets module is a worthy addition to Python. Frankly I thought something like this should have been added a long time ago. But at least now we have it and now we can safely generate cryptographically strong tokens and passwords. Take some time to check out the documentation for this module as it has a few fun recipes to play around with.

Related Readings

Categories: FLOSS Project Planets

NumFOCUS: Welcome Back, Gina - Communications Director

Planet Python - Thu, 2017-02-16 12:30
NumFOCUS is very excited to welcome Gina Helfrich back to our team! Gina Helfrich first joined NumFOCUS in 2015 as Communications Director; she was the second full-time staff member at NumFOCUS. Gina is a highly skilled communications professional with experience in a variety of professional settings, from higher education to nonprofits to technology. Gina left NumFOCUS in 2016 to co-found recruitHER, a women-owned recruiting & consulting firm committed to making the tech industry more inclusive and diverse. The marketing and communications strategy Gina developed and executed for recruitHER attracted national attention, including feature stories in both Austin Monthly and Austin Woman magazines, a talk at SXSW, and an interview on the Stuff Mom Never Told You podcast. 

Gina is an expert in women's leadership and gender equity with over 12 years of experience working to advance diversity and inclusion. She earned her Ph.D. in Philosophy and Women's Studies from Emory University and is the former Director of the Women's Center at Harvard University. Gina is a respected thought leader and was invited to speak at the 2016 Texas Conference for Women and the 2015 SXSW Interactive Conference. 

Gina returned to NumFOCUS this week to resume her leadership of our communications efforts. Thanks in part to generous support from the Moore Foundation, Gina will also serve as our Program Manager for Diversity & Inclusion. "I am thrilled to be rejoining the NumFOCUS team. I love working with our supporters and look forward to helping our open source community continue to grow and develop," she said. In other staff-related news, Savannah Mercado has transitioned from her former role as Communications Coordinator to a new position as Events Manager. Congrats, Savannah!
Categories: FLOSS Project Planets

Valuebound: What skills should a Drupal Developer have?

Planet Drupal - Thu, 2017-02-16 11:14

With the ever-growing Drupal community, a beginner is often lost in the vast number of resources. With the increasing number of developers at Valuebound, I spoke to some of our seasoned developers for their opinions on the skills a Drupal developer should have, and also sifted through tons of material from Stack Overflow and a few other places.
The skillset we discuss here will give you a clear idea of where you stand, what you know, and what you do not know. Of course, let me know what I have missed, and I will quickly add your suggestions. Before this I have …

Categories: FLOSS Project Planets

Zivtech: Why Performance Best Practices Aren't Speeding Up Your Site

Planet Drupal - Thu, 2017-02-16 09:00

There's no shortage of generic web performance optimization advice out there. You probably have a checklist of things to do to speed up your site. Maybe your hosting company sent you a list of performance best practices. Or maybe you use an automated recommendation service.

You've gone through all the checklist compliance work, but haven't seen any change in your site's speed. What's going on here? You added a CDN. You optimized your image sizes. You removed unused code. You even turned off database logging on your Drupal site and no one can read the logs anymore! But it should be worth it, right? You followed best practice recommendations, so why don't you see an improvement?

Here are three reasons why generic performance checklists don't usually help, and what really does.

1. Micro-Optimizations

Generic performance recommendations don't provide you with a sense of scale for how effective they'll be, so how do you know if they're worth doing?

People will say, "Well, if it's an improvement, it's an improvement, and we should do it. You're not against improvements are you?" This logic only works if you have infinite time or money. You wouldn't spend $2000 to make changes you knew would save only 1ms on a 10s page load.

Long performance checklists are usually full of well-meaning suggestions that turn out to be micro-optimizations on your specific site. It makes for an impressive list. We fall for it because it plays into our desire for completion. We think, "ROI be damned! There's an item on this list and we have got to do it."

Just try to remember: ABP. Always Be Prioritizing.

You don't have to tackle every item on the list just for completion's sake. You need to measure optimizations to determine whether you're adding a micro-optimization or slaying a serious bottleneck.

2. Redundant Caching

In security, the advice is to add more layers of protection. In caching, not so much. Adding redundant caching will often have little to no effect on your page load time.

Caching lets your process take a shortcut the majority of the time. Imagine a kid who takes a shortcut on her walk to school. She cuts through her neighbor's backyard instead of going around the block. One in 10,000 times, there's a rabid squirrel in the yard, so she takes the long way. Her entrepreneurial friend offers to sell her a map to a new shortcut. It's a best practice! It cuts time off the original full route, which she almost never uses, but it's longer than her usual shortcut. It will save her a little time on rabid squirrel days. Is it worth the price?

The benefit of a redundancy like this is marginal, but if there's a significant time or cost investment it's probably not worth it. It's better to focus on getting the most bang for your buck. Keep in mind that the time involved to add caching includes making sure that new caches invalidate properly so that your site does not show stale content (and leave your editors calling support to report a problem when their new post does not appear promptly).

3. Bottlenecks or Bust

Simply speaking, page load time consists of two main layers. First there is the server-side (back-end) which creates the HTML for a web page. Next, the client-side (front-end) renders it, adding the images, CSS, and JavaScript in your web browser.

The first step to effective performance optimization is to determine which layer is slow. It may be both. Developers tend to optimize the layer of their expertise and ignore the other one. It's common to focus efforts on the wrong layer.

Now on the back-end, a lot of the process occurs in series. One thing happens, and then another. First the web server routes a request. Then a PHP function runs. And another. It calls the database with a query. One task completes and then the next one begins. If you decrease the time of any of the tasks, the total time will decrease. If you do enough micro-optimizations, they can actually add up to something perceptible.

But the front-end, generally speaking, is different. The browser tries to do many things all at the same time (in parallel). This changes the game completely.

Imagine you and your brother start a company called Speedy Car Cleaning. You want your customers to get the fastest car cleaning possible, so you decide that while you wash a car, your brother will vacuum at the same time. One step doesn't rely on the other to be completed first, so you'll work in parallel. It takes you five minutes to wash a car, and it takes your brother two minutes to vacuum it. Total time to clean a car? Five minutes. You want to speed things up even more, so your brother buys a more powerful vacuum and now it only takes him one minute. What's the new total time to clean a car?

If you said five minutes, you are correct. When processes run in parallel, the slowest process is the only one that impacts total time.

This is a lesson of factory optimization as well. A factory has many machines and stations running in parallel, so if you speed up the process at any point that is not the bottleneck, you'll have no impact on the throughput. Not a small impact - no impact.
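
To make the arithmetic concrete, here's a tiny sketch (illustrative numbers only, in Python) of why speeding up a non-bottleneck task helps a serial pipeline but does nothing for a parallel one:

# Times in minutes; made-up numbers for illustration.
serial_tasks = [5.0, 2.0]      # steps that run one after another
parallel_tasks = [5.0, 2.0]    # steps that run at the same time

print(sum(serial_tasks))       # serial total: 7.0 -- every step counts
print(max(parallel_tasks))     # parallel total: 5.0 -- only the slowest counts

parallel_tasks[1] = 1.0        # buy the faster vacuum
print(max(parallel_tasks))     # still 5.0 -- no impact on throughput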

Ok, then what can we do?

So is it worthless to follow best practices to optimize your speed? No. You might get lucky, and those optimizations will make a big impact. They have their place, especially if you have a fairly typical site.

But if you don't see results from following guesses about why your site is slow, there's only one sure way to speed things up.

You have to find out what's really going on.

Establish a benchmark of current performance. Determine which layer contributes the most to the total time. Use profiling tools to find where the big costs are on the back-end and the bottlenecks on the front-end. Measure the effects of your improvements.
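
As one concrete illustration (a generic Python sketch, not a Drupal-specific tool), a profiler like the standard library's cProfile points at where back-end time actually goes instead of where you guess it goes:

import cProfile

def expensive_query():
    # Stand-in for a slow database call or template render.
    return sum(i * i for i in range(10 ** 6))

def build_page():
    expensive_query()
    return '<html></html>'

# Cumulative sorting surfaces the real bottleneck first.
cProfile.run('build_page()', sort='cumulative')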

If your site's performance is an issue, ask an expert for a performance audit. Make sure they have expertise with your server infrastructure, your CMS or framework, and front-end performance. With methodical analysis and profiling measurements, they'll get to the bottom of it. And don't sweat those 'best practice' checklists.

Categories: FLOSS Project Planets

Ekos Polar Alignment Assistant Tool

Planet KDE - Thu, 2017-02-16 08:45
When setting up a German Equatorial Mount (GEM) for imaging, a critical aspect of capturing long-exposure images is to ensure a proper polar alignment. A GEM mount has two axes: the Right Ascension (RA) axis and the Declination (DE) axis. Ideally, the RA axis should be aligned with the celestial sphere's polar axis. A mount's job is to track the stars' motion across the sky, from the moment they rise at the eastern horizon, all the way up across the meridian, and westward until they set.


In long exposure imaging, a camera is attached to the telescope where the image sensor captures incoming photons from a particular area in the sky. The incident photons have to strike the same photo-site over and over again if we are to gather a clear and crisp image. Of course, actual photons do not behave this way: optics, atmosphere, and seeing quality all scatter and refract photons in one way or another. Furthermore, photons do not arrive uniformly but follow a Poisson distribution. For point-like sources such as stars, a point spread function describes how photons are spatially distributed across the pixels. Nevertheless, the overall idea is that we want to keep the source photons hitting the same pixels. Otherwise, we might end up with an image plagued with various trail artifacts.

Since mounts are not perfect, they cannot perfectly track an object as it transits across the sky. This can stem from many factors, one of which is the misalignment of the mount's Right Ascension axis with respect to the celestial pole axis. Polar alignment removes one of the biggest sources of tracking error in the mount, but other sources of error still play a factor. If properly aligned, some mounts can track an object for a few minutes with a deviation of only 1-2 arcsec RMS.

However, unless you have a fancy top-of-the-line mount, you'd probably want to use an autoguider to keep the same star locked in the same position over time. Despite all of this, if the axis of the mount is not properly aligned with the celestial pole, even a mechanically perfect mount will lose tracking over time. Tracking errors are proportional to the magnitude of the misalignment. It is therefore very important for long-exposure imaging to get the mount polar aligned to reduce any residual errors as it tracks across the sky.

Several polar-alignment aids exist today, including, but not limited to:

1. A polar scope built into your mount.
2. Using drift alignment from applications like PHD2.
3. Dedicated hardware like QHY's PoleMaster.
4. Ekos Legacy Polar Alignment tool: you need to take exposures of two different points in the sky to measure the drift and find the polar error in each axis (Altitude & Azimuth).
5. SharpCap Polar Alignment tool.

Out of the above, the easiest to use are probably QHY's PoleMaster and SharpCap's polar alignment tool. However, both are exclusive to Windows. KStars users have long requested an easy-to-use polar alignment helper in Ekos that leverages its astrometry.net backend.

During the last couple of weeks, I worked on developing the Ekos Polar Alignment Assistant Tool (PAA). I started with a simple mathematical model consisting of two images rotated by an arbitrary angle. A sample illustration is below:



Given two points, we can calculate the arc length from the rotation angle, and hence the radius. Therefore, it is possible to find two circle solutions that match, one of which contains the mount's actual RA axis within the image. Finding out which solution is the correct one turned out to be challenging, and even the mount's own reported rotation angle cannot be fully trusted. To draw a unique circle, you need three points. So Gerry Rozema, one of INDI's venerable developers, suggested capturing three images to uniquely identify the circle without involving a lot of fancy math.
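
For the curious, the center can be recovered as the circumcenter of the three solved image centers. Here is a minimal sketch in Python (my illustration only; the actual Ekos code is C++):

# Circumcenter of three points: the point equidistant from all three,
# i.e. the center the mount rotates about in the image.
def circumcenter(p1, p2, p3):
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

print(circumcenter((0.0, 1.0), (1.0, 0.0), (0.0, -1.0)))  # (0.0, 0.0)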

Since it relies on astrometry.net, the PAA has more relaxed requirements than other tools, making it accessible to more users. You can use your own primary or guide camera, provided it has a wide-enough FOV for the astrometry solver.

Moreover, the assistant can automatically capture, solve, and even rotate the mount for you. All you have to do is to make the necessary adjustments to your mount.

The new PAA works by capturing and solving three images. It is technically possible to rely on only two images, as described above, but three images improve the accuracy of the solution. After each capture, the mount rotates by a fixed amount and the next image is captured and solved.



Since the mount's true RA/DE coordinates are resolved by astrometry, we can construct a unique circle from the three centers found in the astrometry solutions. The circle's center is the point the mount rotates about (the RA axis), and ideally this point should coincide with the celestial pole. However, if there is a misalignment, Ekos draws a correction vector. This correction vector can be placed anywhere in the image. Next, the user refreshes the camera feed and adjusts the mount's Altitude and Azimuth knobs until the star is located at the designated crosshair. It's that easy!

Ekos PAA is now in beta, and tests and feedback are highly appreciated.


Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Drupal Logos in Human and Superhuman Forms

Planet Drupal - Thu, 2017-02-16 04:45
Every brand needs a logo. When Dries Buytaert decided to release the software behind drop.org back in 2001, making Drupal an open source project, he needed a symbol too. So Kristjan Jansen and Steven Wittens joined forces and stylized the Druplicon, a drop with eyes, a curved nose, and a mischievous smile. Since then, the Druplicon has been an indispensable part of the Drupal community. The Druplicon is relatively easy to manage, moderate, and share, so people in the Drupal community like working with it. Back in December 2016, at Christmas time, we presented you the possibilities of… READ MORE
Categories: FLOSS Project Planets

Catalin George Festila: Compare two images: the histogram method.

Planet Python - Thu, 2017-02-16 03:17
This is a very simple example of how to compare the histograms of two images and report the inconsistencies that arise.
The example shows an alternative solution: the histogram method.
The script was run under Fedora 25.
If the images are identical, the result will be 0.0.
For testing, I changed image2.png by drawing a line across it covering about 10% of the image.
The result of the script was:
1116.63243729
The images have these dimensions: 738 x 502 px.
import math
import operator

from PIL import Image

# Build a histogram (a list of per-bin pixel counts) for each image.
h1 = Image.open("image1.png").histogram()
h2 = Image.open("image2.png").histogram()

# Root-mean-square difference of the two histograms:
# square each per-bin difference, average, then take the square root.
rms = math.sqrt(reduce(operator.add,
                       map(lambda a, b: (a - b) ** 2, h1, h2)) / len(h1))
print rms
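
Note that the script above is Python 2: print is a statement and reduce is a builtin. A minimal Python 3 port (my sketch, same math) might look like this:

import math
from PIL import Image

h1 = Image.open("image1.png").histogram()
h2 = Image.open("image2.png").histogram()

# Root-mean-square difference of the two histograms.
rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)) / len(h1))
print(rms)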
The operator module exports a set of efficient functions corresponding to the intrinsic operators of Python.
Example:
operator.lt(a, b)
operator.le(a, b)
operator.eq(a, b)
operator.ne(a, b)
operator.ge(a, b)
operator.gt(a, b)
operator.__lt__(a, b)
operator.__le__(a, b)
operator.__eq__(a, b)
operator.__ne__(a, b)
operator.__ge__(a, b)
operator.__gt__(a, b)
These are equivalent to the comparison operators:
lt(a, b) is equivalent to a < b
le(a, b) is equivalent to a <= b
Another example (note that mul must be imported first):
>>> from operator import mul
>>> # Elementwise multiplication
>>> map(mul, [0, 1, 2, 3], [10, 20, 30, 40])
[0, 20, 60, 120]

>>> # Dot product
>>> sum(map(mul, [0, 1, 2, 3], [10, 20, 30, 40]))
200
Categories: FLOSS Project Planets

Craig Sanders: New D&D Cantrip

Planet Debian - Thu, 2017-02-16 03:01

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician

The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death – theirs or yours.

This belief can not be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual.

New D&D Cantrip is a post from: Errata

Categories: FLOSS Project Planets

Carl Trachte: Crude Testing of Equivalent Code With assert

Planet Python - Wed, 2017-02-15 22:40
In engineering and business environments, it is common to have to

1) recreate an equivalent calculation in a different format for a different purpose and check the results against the original calculation.

2) shepherd a calculation process from one vendor system through a transition to another (an upgrade, for example) by hacking a set of provisional scripts together.

3) implement a bunch of linear regressions in calculations.  If I recall correctly, there has been linear regression functionality in Excel for ages (since the early 90's?); it is the tried and (maybe) true tool of data fitters/forcers everywhere.  Conceivably you could accurately, if not precisely, model just about any curve with enough linear segments.  Mercifully, the ones I show below have only two segments per data set (a toy sketch of the idea follows this list).
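
As a toy illustration of that idea (mine, not the vendor's code; the full-blown version comes later), a piecewise-linear model is just a lookup from an x range to a slope and intercept:

# Toy piecewise-linear model: (xmin, xmax) -> (slope, intercept).
SEGMENTS = {(0.0, 1.0): (1.0, 0.0),    # y = x on [0.0, 1.0]
            (1.0, 5.0): (0.5, 0.5)}    # y = 0.5x + 0.5 on (1.0, 5.0]

def approx(x):
    for (xmin, xmax), (m, b) in SEGMENTS.items():
        if xmin <= x <= xmax:
            return m * x + b
    raise ValueError('x out of range: {0:f}'.format(x))

print(approx(0.5))   # 0.5
print(approx(4.0))   # 2.5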

This problem embodies all three bullets above.  I've sanitized the code which makes it a little ridiculous, but no less voluminous (sorry).

Here's what we have in the vendor's system - it is Python (2.7) code, but it runs inside specialized, separately purchased software that my department doesn't have.  Also, it's full of a bunch of constants that I'm not really comfortable recognizing or maintaining:

"""
Cut and pasted formulas from vendor
specific GUI/Python API.
"""

# LOC1
def loc1fromvendor(CONTROL1,
                   CONTROL2,
                   x):
    """
    Loc1 y calculation from vendor.

    CONTROL1 is the primary code (integer
    or round digit float).
    CONTROL2 is the secondary code (integer
    or round digit float).
    x is the x-axis input.  (float).

    Returns float.
    """
    DEFAULTY = 2.50
   
    if CONTROL1 == 9:
            if CONTROL2 == 1:
                if x > 1.275:
                    Y = (-0.0003 * x) + 6.4781
                else:
                    Y = 2.53
            else:
                Y = 2.54
    elif CONTROL1 == 8:
            Y = 2.6
    elif CONTROL1 == 7:
            if CONTROL2 == 1:
                if x > 1.315:
                    Y = -0.003 * x + 6.548
                else:
                    Y = 2.6
            else:
                Y = -0.0031 * x + 2.958
    elif CONTROL1 == 6:
            if CONTROL2 == 1:
                if x >1.310:
                     Y = -0.0018 * x + 4.9307
                else:
                    Y = 2.57
            else:
                Y = -0.0004 * x + 3.0612
    elif CONTROL1 == 5:
            if CONTROL2 == 1:
                if x >1.250:
                    Y = -0.0026 * x + 5.7152
                else:
                    Y = 2.47
            else:
                Y = -0.0003 * x + 2.8733
    elif CONTROL1 == 4:
            if CONTROL2 == 1:
                if x >1.290:
                    Y = -0.0032 * x + 6.7257
                else:
                    Y = 2.6
            else:
                Y = -0.0002 * x + 2.8215
    elif CONTROL1 == 1:
            if CONTROL2 == 1:
                Y = 2.35
            else:
                Y = 2.45
    else:
            Y = DEFAULTY
    return Y

# LOC2
def loc2fromvendor(CONTROL1,
                   CONTROL2,
                   x):
    """
    Loc2 y calculation from vendor.

    CONTROL1 is the primary code (integer
    or round digit float).
    CONTROL2 is the secondary code (integer
    or round digit float).
    x is the x-axis input.  (float).

    Returns float.
    """
    DEFAULTY = 2.50
   
    if CONTROL1 == 9:
        if CONTROL2 == 1:
                Y = -0.0006 * x + 3.3121
        else:
                Y = -0.0006 * x + 3.3121
    elif CONTROL1 == 8:
            if CONTROL2 == 1:
                if x >1.050:
                    Y = 2.65
                else:
                    Y = 2.65
            else:
                if x >1.050:
                    Y = 2.65
                else:
                    Y = 2.65
    elif CONTROL1 == 7:
            if CONTROL2 == 1:
                if x > 1.050:
                    Y = -0.0012 * x + 3.886
                else:
                    Y = -0.0012 * x + 3.886
            else:
                if x > 1.050:
                    Y = -0.00007 * x + 2.6787
                else:
                    Y = -0.00007 * x + 2.6787
    elif CONTROL1 == 6:
            if CONTROL2 == 1:
                if x >1.050:
                    Y = -0.001 * x + 3.731
                else:
                    Y = -0.001 * x + 3.731
            else:
                if x >1.050:
                    Y = -0.0012 * x + 4.0757
                else:
                    Y = -0.0012 * x + 4.0757
    elif CONTROL1 == 5:
            if CONTROL2 == 1:
                if x >1.050:
                    Y = 2.1
                else:
                    Y = 2.1
            else:
                if x >1.050:
                    Y = -0.0003 * x + 2.9564
                else:
                    Y = -0.0003 * x + 2.9564
    elif CONTROL1 == 4:
            if CONTROL2 == 1:
                if x >1.050:
                    Y = -0.000009 * x + 2.1972
                else:
                    Y = -0.000009 *x + 2.1972
            else:
                if x >1.050:
                    Y = -0.0005 * x + 3.2461
                else:
                    Y = -0.0005 * x + 3.2461               
    elif CONTROL1 == 1:
            if CONTROL2 == 1:
                Y = -0.001 * x + 3.7257
            else:
                Y = -0.001 * x + 3.7257
    else:
            Y = DEFAULTY
    return Y

# LOC3
def loc3fromvendor(CONTROL1,
                   CONTROL2,
                   x):
    """
    Loc3 y calculation from vendor.

    CONTROL1 is the primary code (integer
    or round digit float).
    CONTROL2 is the secondary code (integer
    or round digit float).
    x is the x-axis input.  (float).

    Returns float.
    """
    DEFAULTY = 2.50
   
    if CONTROL1 == 9:
            Y = 2.49
    elif CONTROL1 == 8:
            if x > 1.000:
                Y = -0.0006 * x + 3.3291
            else:
                Y = 2.64
    elif CONTROL1 == 7:
            if x > 1.050:
                Y = -0.0009 * x + 3.5929
            else:
                Y = 2.67
    elif CONTROL1 == 6:
            if x > 1.080:
                Y = -0.0013 * x + 4.0665
            else:
                # Debug.
                # print 'x in vendor function = {:f}'.format(x)
                Y = 2.65
    elif CONTROL1 == 5:
            if x > 950:
                Y = -0.001 * x + 3.4996
            else:
                Y = 2.59
    elif CONTROL1 == 4:
            if x > 1.100:
                Y = -0.0018 * x + 4.6690
            else:
                Y = 2.68
    elif CONTROL1 == 1:
            if x > 1.000:
                Y = -0.0004 * x + 2.8857
            else:
                Y = 2.49
    else:
            Y = DEFAULTY
    return Y

# LOC4
def loc4fromvendor(CONTROL1,
                   CONTROL2,
                   x):
    """
    Loc4 y calculation from vendor.

    CONTROL1 is the primary code (integer
    or round digit float).
    CONTROL2 is the secondary code (integer
    or round digit float).
    x is the x-axis input.  (float).

    Returns float.
    """
    DEFAULTY = 2.50
   
    if CONTROL1 == 9:
        Y = -0.0000008 * x + 2.6761
    elif CONTROL1 == 8:
            Y = -0.000003 * x + 2.6975
    elif CONTROL1 == 7:
            if CONTROL2 == 1:
                if x > 1.000:
                    Y = -0.0018 * x + 4.3902
                else:
                    Y = 2.60
            else:
                Y = -0.00009 * x + 2.7334
    elif CONTROL1 == 6:
            if CONTROL2 == 1:
                if x > 1.100:
                     Y = -0.0013 * x + 4.0322
                else:
                    Y = 2.58
            else:
                Y = -0.0002 * x + 2.8081
    elif CONTROL1 == 5:
            if CONTROL2 == 1:
                Y = -0.0018 * x + 4.2758
            else:
                Y = -0.0001 * x + 2.6535
    elif CONTROL1 == 4:
            if CONTROL2 == 1:
                if x > 1.000:
                    Y = -0.002 * x + 4.5548
                else:
                    Y = 2.60
            else:
                if x > 1125:
                    Y = -0.0011 * x + 3.9184
                else:
                    Y = 2.65
    elif CONTROL1 == 1:
            Y = -0.0003 * x + 2.7802
    else:
            Y = DEFAULTY
    return Y


My code is less about multiple functions and more about a single function with a bunch of lookup dictionaries rolled into one big dictionary.  I'm not arguing my approach is necessarily better.  For instance, I implemented my x variable ranges with lower bounds based on the precision of my data.  This isn't very portable.

The need to lock down my results and keep them in line with the original led me to the assert statement and a little walk of my dictionary against both my function and the vendor's.  This way, when I get a new "vendor function" (actually a snippet of code for a particular location or area), I can paste it into this crude ersatz test suite and see what needs changing.
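
One caveat, as an aside of my own: exact == comparison of floats only works here because both implementations do the same arithmetic in the same order.  If the two code paths ever diverged in operation order, a tolerance-based check would be the safer crude test, along these lines:

TOLERANCE = 0.0000001

def assertclose(a, b):
    """Crude stand-in for assert a == b when floats may differ in the last bits."""
    assert abs(a - b) <= TOLERANCE, '{0:f} != {1:f}'.format(a, b)

assertclose(2.65, 2.6500000)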

I caught a few missed decimal places, typos, transposed digits, and plain old omissions in my code using this approach.  It is possible I've gone overboard with constants.  I don't care.  I have to read them and the only way I can keep them straight is by lining up the decimal places and locking them down as named constants (programmatically they are variables, but I'm not changing them).

"But why don't they and why don't you use scientific notation?"

As we used to say in the Navy years ago, "There is the right way, there is the wrong way, and the Navy way."  Guess which one the vendor uses?  Onward.

Here's my code with the "test" of equivalency for the two approaches:

"""
Attempt at generic script to process linear regressions
for multiple areas.
"""

import sys

import vendorformulas as vfx

# Loc abbreviations.
LOC1 = 'loc1'
LOC2 = 'loc2'
LOC3 = 'loc3'
LOC4 = 'loc4'

DEFAULTY = 2.50

CTL2ONE = 1
CTL2TWO = 2

BIGX = 5000.0
LITTLEX = 0.0

TYPE9 = 9
TYPE8 = 8
TYPE7 = 7
TYPE6 = 6
TYPE5 = 5
TYPE4 = 4
TYPE1 = 1
# Undefined control1 type for default for each loc.
UNDEF = 99

slope = 'm'
b = 'b'

# Compute y using formula (y = mx + b), control1, x, control2
# nested dictionaries
#     control2
#         x range
#             m
#             b
# Original logic gives unassigned CONTROL2 block to CTL2TWO interpretation
# Honor this in logic in program.

# Slope values.
NOSLOPE = 0.0

NEG0032000 = -0.0032000
NEG0031000 = -0.0031000
NEG0030000 = -0.0030000
NEG0026000 = -0.0026000
NEG0020000 = -0.0020000
NEG0018000 = -0.0018000
NEG0013000 = -0.0013000
NEG0012000 = -0.0012000
NEG0011000 = -0.0011000
NEG0010000 = -0.0010000
NEG0009000 = -0.0009000
NEG0006000 = -0.0006000
NEG0005000 = -0.0005000
NEG0004000 = -0.0004000
NEG0003000 = -0.0003000
NEG0002000 = -0.0002000
NEG0001000 = -0.0001000
NEG0000900 = -0.0000900
NEG0000700 = -0.0000700
NEG0000090 = -0.0000090
NEG0000030 = -0.0000030
NEG0000008 = -0.0000008

# Intercept values.
T2PT1000 = 2.1000
T2PT1972 = 2.1972
T2PT3500 = 2.3500
T2PT4500 = 2.4500
T2PT4700 = 2.4700
T2PT4900 = 2.4900
T2PT5300 = 2.5300
T2PT5400 = 2.5400
T2PT5700 = 2.5700
T2PT5800 = 2.5800
T2PT5900 = 2.5900
T2PT6000 = 2.6000
T2PT6400 = 2.6400
T2PT6500 = 2.6500
T2PT6535 = 2.6535
T2PT6700 = 2.6700
T2PT6761 = 2.6761
T2PT6787 = 2.6787
T2PT6800 = 2.6800
T2PT6975 = 2.6975
T2PT7334 = 2.7334
T2PT7802 = 2.7802
T2PT8081 = 2.8081
T2PT8215 = 2.8215
T2PT8733 = 2.8733
T2PT8857 = 2.8857
T2PT9564 = 2.9564
T2PT9580 = 2.9580
T3PT0612 = 3.0612
T3PT2461 = 3.2461
T3PT3121 = 3.3121
T3PT3291 = 3.3291
T3PT4996 = 3.4996
T3PT5929 = 3.5929
T3PT7257 = 3.7257
T3PT7310 = 3.7310
T3PT8860 = 3.8860
T3PT9184 = 3.9184
F4PT0322 = 4.0322
F4PT0665 = 4.0665
F4PT0757 = 4.0757
F4PT2758 = 4.2758
F4PT3902 = 4.3902
F4PT5548 = 4.5548
F4PT6690 = 4.6690
F4PT9307 = 4.9307
F5PT7152 = 5.7152
S6PT4781 = 6.4781
S6PT5480 = 6.5480
S6PT7257 = 6.7257

LOC1YS = {TYPE9:
             {CTL2ONE:
                 {(LITTLEX, 1.27500):
                     {slope:NOSLOPE, b:T2PT5300},
                  (1.27501, BIGX):
                     {slope:NEG0003000, b:S6PT4781}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT5400}}},
          TYPE8:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT6000}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT6000}}},
          TYPE7:
             {CTL2ONE:
                 {(LITTLEX, 1.31500):
                     {slope:NOSLOPE, b:T2PT6000},
                  (1.31501, BIGX):
                     {slope:NEG0030000, b:S6PT5480}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0031000, b:T2PT9580}}},
          TYPE6:
             {CTL2ONE:
                 {(LITTLEX, 1.31000):
                     {slope:NOSLOPE, b:T2PT5700},
                  (1.31001, BIGX):
                     {slope:NEG0018000, b:F4PT9307}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0004000, b:T3PT0612}}},
          TYPE5:
             {CTL2ONE:
                 {(LITTLEX, 1.25000):
                     {slope:NOSLOPE, b:T2PT4700},
                  (1.25001, BIGX):
                     {slope:NEG0026000, b:F5PT7152}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0003000, b:T2PT8733}}},
          TYPE4:
             {CTL2ONE:
                 {(LITTLEX, 1.29000):
                     {slope:NOSLOPE, b:T2PT6000},
                  (1.29001, BIGX):
                     {slope:NEG0032000, b:S6PT7257}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0002000, b:T2PT8215}}},
          TYPE1:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT3500}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT4500}}},
          UNDEF:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}}}}
# END LOC1

# LOC2
LOC2YS = {TYPE9:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0006000, b:T3PT3121}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0006000, b:T3PT3121}}},
          TYPE8:
              {CTL2ONE:
                  {(LITTLEX, BIGX):
                      {slope:NOSLOPE, b:T2PT6500}},
               CTL2TWO:
                  {(LITTLEX, BIGX):
                      {slope:NOSLOPE, b:T2PT6500}}},
          TYPE7:{CTL2ONE:
                  {(LITTLEX, BIGX):
                      {slope:NEG0012000, b:T3PT8860}},
               CTL2TWO:
                  {(LITTLEX, BIGX):
                      {slope:NEG0000700, b:T2PT6787}}},
          TYPE6:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0010000, b:T3PT7310}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0012000, b:F4PT0757}}},
          TYPE5:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT1000}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0003000, b:T2PT9564}}},
          TYPE4:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0000090, b:T2PT1972}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0005000, b:T3PT2461}}},
          TYPE1:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0010000, b:T3PT7257}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0010000, b:T3PT7257}}},
          UNDEF:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}}}}
# END LOC2

LOC3YS = {TYPE9:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT4900}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:T2PT4900}}},
          TYPE8:{CTL2ONE:
                  {(LITTLEX, 1.00000):
                      {slope:NOSLOPE, b:T2PT6400},
                   (1.00001, BIGX):
                      {slope:NEG0006000, b:T3PT3291}},
               CTL2TWO:
                  {(LITTLEX, 1.00000):
                      {slope:NOSLOPE, b:T2PT6400},
                   (1.00001, BIGX):
                      {slope:NEG0006000, b:T3PT3291}}},
          TYPE7:{CTL2ONE:
                  {(LITTLEX, 1.05000):
                      {slope:NOSLOPE, b:T2PT6700},
                   (1.05001, BIGX):
                      {slope:NEG0009000, b:T3PT5929}},
               CTL2TWO:
                  {(LITTLEX, 1.05000):
                      {slope:NOSLOPE, b:T2PT6700},
                   (1.050001, BIGX):
                      {slope:NEG0009000, b:T3PT5929}}},
          TYPE6:
             {CTL2ONE:
                 {(LITTLEX, 1.08000):
                     {slope:NOSLOPE, b:T2PT6500},
                  (1.08001, BIGX):
                     {slope:NEG0013000, b:F4PT0665}},
              CTL2TWO:
                 {(LITTLEX, 1.08000):
                     {slope:NOSLOPE, b:T2PT6500},
                  (1.08001, BIGX):
                     {slope:NEG0013000, b:F4PT0665}}},
          TYPE5:
             {CTL2ONE:
                 {(LITTLEX, 950.0):
                     {slope:NOSLOPE, b:T2PT5900},
                  (950.01, BIGX):
                     {slope:NEG0010000, b:T3PT4996}},
              CTL2TWO:
                 {(LITTLEX, 950.0):
                     {slope:NOSLOPE, b:T2PT5900},
                  (950.01, BIGX):
                     {slope:NEG0010000, b:T3PT4996}}},
          TYPE4:
             {CTL2ONE:
                 {(LITTLEX, 1.10000):
                     {slope:NOSLOPE, b:T2PT6800},
                  (1.10001, BIGX):
                     {slope:NEG0018000, b:F4PT6690}},
              CTL2TWO:
                 {(LITTLEX, 1.10000):
                     {slope:NOSLOPE, b:T2PT6800},
                  (1.10001, BIGX):
                     {slope:NEG0018000, b:F4PT6690}}},
          TYPE1:
             {CTL2ONE:
                 {(LITTLEX, 1.00000):
                     {slope:NOSLOPE, b:T2PT4900},
                  (1.00001, BIGX):
                     {slope:NEG0004000, b:T2PT8857}},
              CTL2TWO:
                 {(LITTLEX, 1.00000):
                     {slope:NOSLOPE, b:T2PT4900},
                  (1.00001, BIGX):
                     {slope:NEG0004000, b:T2PT8857}}},
          UNDEF:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}}}}
# END LOC3

# LOC4
LOC4YS = {TYPE9:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0000008, b:T2PT6761}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0000008, b:T2PT6761}}},
          TYPE8:{CTL2ONE:
                  {(LITTLEX, BIGX):
                      {slope:NEG0000030, b:T2PT6975}},
               CTL2TWO:
                  {(LITTLEX, BIGX):
                      {slope:NEG0000030, b:T2PT6975}}},
          TYPE7:
             {CTL2ONE:
                 {(LITTLEX, 1.00000):
                     {slope:NOSLOPE, b:T2PT6000},
                  (1.00001, BIGX):
                     {slope:NEG0018000, b:F4PT3902}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0000900, b:T2PT7334}}},
          TYPE6:
             {CTL2ONE:
                 {(LITTLEX, 1.10000):
                     {slope:NOSLOPE, b:T2PT5800},
                  (1.10001, BIGX):
                     {slope:NEG0013000, b:F4PT0322}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0002000, b:T2PT8081}}},
          TYPE5:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0018000, b:F4PT2758}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0001000, b:T2PT6535}}},
          TYPE4:
             {CTL2ONE:
                 {(LITTLEX, 1.00000):
                     {slope:NOSLOPE, b:T2PT6000},
                  (1.00001, BIGX):
                     {slope:NEG0020000, b:F4PT5548}},
              CTL2TWO:
                 {(LITTLEX, 1125.0):
                     {slope:NOSLOPE, b:T2PT6500},
                  (1125.01, BIGX):
                     {slope:NEG0011000, b:T3PT9184}}},
          TYPE1:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NEG0003000, b:T2PT7802}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NEG0003000, b:T2PT7802}}},
          UNDEF:
             {CTL2ONE:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}},
              CTL2TWO:
                 {(LITTLEX, BIGX):
                     {slope:NOSLOPE, b:DEFAULTY}}}}
# END LOC4

YS = {LOC1:LOC1YS,
      LOC2:LOC2YS,
      LOC3:LOC3YS,
      LOC4:LOC4YS}

VALIDCONTROL1 = [TYPE9, TYPE8, TYPE7, TYPE6, TYPE5, TYPE4, TYPE1]

RETURNDEFAULTMSG = 'Returning default Y for  {0:s}, {1:2.0f}, {2:2.0f}, {3:8.5f} . .  .'
TESTINGMSG = 'Testing dictionary based y == function based y for {0:s}, {1:d}, {2:d}, {3:8.5f} . .  .'
ASSERTIONERRORMSG = 'Assertion Error for {0:s}, {1:f}, {2:d}, {3:8.5f} . . .'

def gety(loc, control1, x, control2):
    """
    y calculation for y = mx + b.

    loc is the four letter loc abbreviation (loc1).

    control1 is the integer CONTROL1 code.

    x is a float for the x component of y = mx + b.

    control2 is the integer CONTROL2 code.
    """
    # Compute y using formula (y = mx + b), control1, x, control2.
    # Match loc.
    ydictionary = YS[loc]
    # Check if control1 code belongs to recognized types.
    if control1 in VALIDCONTROL1:
        # Match control1.
        for control2x in ydictionary[control1]:
            # match control2.
            for xrangex in ydictionary[control1][control2]:
                # match x range.
                if (x >= xrangex[0] and
                    x <= xrangex[1] and control2x == control2):
                    mxb = ydictionary[control1][control2][xrangex]
                    y = mxb[slope] * x + mxb[b]
                    return y
        # Possible that control2 not defined;
        # Defaults to CONTROL2TWO.
        for xrangex in ydictionary[control1][CTL2TWO]:
            # match elevation range.
            if (x >= xrangex[0] and
                x <= xrangex[1]):
                mxb = ydictionary[control1][CTL2TWO][xrangex]
                y = mxb[slope] * x + mxb[b]
                return y
    # Doesn't matter if CTL2TWO or CTL2ONE or undefined
    #     - default for loc will always be [UNDEF][CTL2TWO].
    print RETURNDEFAULTMSG.format(loc, control1, control2, x)
    return ydictionary[UNDEF][CTL2TWO][(LITTLEX, BIGX)][b]

# TEST Calculations.
TESTFUNCS = {LOC1:vfx.loc1fromvendor,
             LOC2:vfx.loc2fromvendor,
             LOC3:vfx.loc3fromvendor,
             LOC4:vfx.loc4fromvendor}

for locx in YS:
    for control1 in YS[locx]:
        for control2 in YS[locx][control1]:
            for xrangex in YS[locx][control1][control2]:
                for z in xrangex:
                    dictionarybasedy = gety(locx, control1, z, control2)
                    functionbasedy = TESTFUNCS[locx](control1, control2, z)
                    print TESTINGMSG.format(locx, control1, control2, z)
                    print 'dictionarybasedy = {0:8.7f}'.format(dictionarybasedy)
                    print 'functionbasedy = {0:8.7f}'.format(functionbasedy)
                    try:
                        assert dictionarybasedy == functionbasedy
                    except AssertionError:
                        print ASSERTIONERRORMSG.format(locx, control1, control2, z)
                        sys.exit()

And the output:


Testing dictionary based y == function based y for loc2, 1, 1,  0.00000 . .  .
dictionarybasedy = 3.7257000
functionbasedy = 3.7257000
Testing dictionary based y == function based y for loc2, 1, 1, 5000.00000 . .  .
dictionarybasedy = -1.2743000
functionbasedy = -1.2743000
Testing dictionary based y == function based y for loc2, 1, 2,  0.00000 . .  .
dictionarybasedy = 3.7257000
functionbasedy = 3.7257000
Testing dictionary based y == function based y for loc2, 1, 2, 5000.00000 . .  .
dictionarybasedy = -1.2743000
functionbasedy = -1.2743000
Returning default Y for  loc2, 99,  1,  0.00000 . .  .
Testing dictionary based y == function based y for loc2, 99, 1,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc2, 99,  1, 5000.00000 . .  .
Testing dictionary based y == function based y for loc2, 99, 1, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc2, 99,  2,  0.00000 . .  .
Testing dictionary based y == function based y for loc2, 99, 2,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc2, 99,  2, 5000.00000 . .  .
Testing dictionary based y == function based y for loc2, 99, 2, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Testing dictionary based y == function based y for loc2, 4, 1,  0.00000 . .  .
dictionarybasedy = 2.1972000
functionbasedy = 2.1972000
Testing dictionary based y == function based y for loc2, 4, 1, 5000.00000 . .  .
dictionarybasedy = 2.1522000
functionbasedy = 2.1522000
Testing dictionary based y == function based y for loc2, 4, 2,  0.00000 . .  .
dictionarybasedy = 3.2461000
functionbasedy = 3.2461000
Testing dictionary based y == function based y for loc2, 4, 2, 5000.00000 . .  .
dictionarybasedy = 0.7461000
functionbasedy = 0.7461000
Testing dictionary based y == function based y for loc2, 5, 1,  0.00000 . .  .
dictionarybasedy = 2.1000000
functionbasedy = 2.1000000
Testing dictionary based y == function based y for loc2, 5, 1, 5000.00000 . .  .
dictionarybasedy = 2.1000000
functionbasedy = 2.1000000
Testing dictionary based y == function based y for loc2, 5, 2,  0.00000 . .  .
dictionarybasedy = 2.9564000
functionbasedy = 2.9564000
Testing dictionary based y == function based y for loc2, 5, 2, 5000.00000 . .  .
dictionarybasedy = 1.4564000
functionbasedy = 1.4564000
Testing dictionary based y == function based y for loc2, 6, 1,  0.00000 . .  .
dictionarybasedy = 3.7310000
functionbasedy = 3.7310000
Testing dictionary based y == function based y for loc2, 6, 1, 5000.00000 . .  .
dictionarybasedy = -1.2690000
functionbasedy = -1.2690000
Testing dictionary based y == function based y for loc2, 6, 2,  0.00000 . .  .
dictionarybasedy = 4.0757000
functionbasedy = 4.0757000
Testing dictionary based y == function based y for loc2, 6, 2, 5000.00000 . .  .
dictionarybasedy = -1.9243000
functionbasedy = -1.9243000
Testing dictionary based y == function based y for loc2, 7, 1,  0.00000 . .  .
dictionarybasedy = 3.8860000
functionbasedy = 3.8860000
Testing dictionary based y == function based y for loc2, 7, 1, 5000.00000 . .  .
dictionarybasedy = -2.1140000
functionbasedy = -2.1140000
Testing dictionary based y == function based y for loc2, 7, 2,  0.00000 . .  .
dictionarybasedy = 2.6787000
functionbasedy = 2.6787000
Testing dictionary based y == function based y for loc2, 7, 2, 5000.00000 . .  .
dictionarybasedy = 2.3287000
functionbasedy = 2.3287000
Testing dictionary based y == function based y for loc2, 8, 1,  0.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc2, 8, 1, 5000.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc2, 8, 2,  0.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc2, 8, 2, 5000.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc2, 9, 1,  0.00000 . .  .
dictionarybasedy = 3.3121000
functionbasedy = 3.3121000
Testing dictionary based y == function based y for loc2, 9, 1, 5000.00000 . .  .
dictionarybasedy = 0.3121000
functionbasedy = 0.3121000
Testing dictionary based y == function based y for loc2, 9, 2,  0.00000 . .  .
dictionarybasedy = 3.3121000
functionbasedy = 3.3121000
Testing dictionary based y == function based y for loc2, 9, 2, 5000.00000 . .  .
dictionarybasedy = 0.3121000
functionbasedy = 0.3121000
Testing dictionary based y == function based y for loc3, 1, 1,  0.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 1, 1,  1.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 1, 1,  1.00001 . .  .
dictionarybasedy = 2.8853000
functionbasedy = 2.8853000
Testing dictionary based y == function based y for loc3, 1, 1, 5000.00000 . .  .
dictionarybasedy = 0.8857000
functionbasedy = 0.8857000
Testing dictionary based y == function based y for loc3, 1, 2,  0.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 1, 2,  1.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 1, 2,  1.00001 . .  .
dictionarybasedy = 2.8853000
functionbasedy = 2.8853000
Testing dictionary based y == function based y for loc3, 1, 2, 5000.00000 . .  .
dictionarybasedy = 0.8857000
functionbasedy = 0.8857000
Returning default Y for  loc3, 99,  1,  0.00000 . .  .
Testing dictionary based y == function based y for loc3, 99, 1,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc3, 99,  1, 5000.00000 . .  .
Testing dictionary based y == function based y for loc3, 99, 1, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc3, 99,  2,  0.00000 . .  .
Testing dictionary based y == function based y for loc3, 99, 2,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc3, 99,  2, 5000.00000 . .  .
Testing dictionary based y == function based y for loc3, 99, 2, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Testing dictionary based y == function based y for loc3, 4, 1,  0.00000 . .  .
dictionarybasedy = 2.6800000
functionbasedy = 2.6800000
Testing dictionary based y == function based y for loc3, 4, 1,  1.10000 . .  .
dictionarybasedy = 2.6800000
functionbasedy = 2.6800000
Testing dictionary based y == function based y for loc3, 4, 1,  1.10001 . .  .
dictionarybasedy = 4.6670200
functionbasedy = 4.6670200
Testing dictionary based y == function based y for loc3, 4, 1, 5000.00000 . .  .
dictionarybasedy = -4.3310000
functionbasedy = -4.3310000
Testing dictionary based y == function based y for loc3, 4, 2,  0.00000 . .  .
dictionarybasedy = 2.6800000
functionbasedy = 2.6800000
Testing dictionary based y == function based y for loc3, 4, 2,  1.10000 . .  .
dictionarybasedy = 2.6800000
functionbasedy = 2.6800000
Testing dictionary based y == function based y for loc3, 4, 2,  1.10001 . .  .
dictionarybasedy = 4.6670200
functionbasedy = 4.6670200
Testing dictionary based y == function based y for loc3, 4, 2, 5000.00000 . .  .
dictionarybasedy = -4.3310000
functionbasedy = -4.3310000
Testing dictionary based y == function based y for loc3, 5, 1, 950.01000 . .  .
dictionarybasedy = 2.5495900
functionbasedy = 2.5495900
Testing dictionary based y == function based y for loc3, 5, 1, 5000.00000 . .  .
dictionarybasedy = -1.5004000
functionbasedy = -1.5004000
Testing dictionary based y == function based y for loc3, 5, 1,  0.00000 . .  .
dictionarybasedy = 2.5900000
functionbasedy = 2.5900000
Testing dictionary based y == function based y for loc3, 5, 1, 950.00000 . .  .
dictionarybasedy = 2.5900000
functionbasedy = 2.5900000
Testing dictionary based y == function based y for loc3, 5, 2, 950.01000 . .  .
dictionarybasedy = 2.5495900
functionbasedy = 2.5495900
Testing dictionary based y == function based y for loc3, 5, 2, 5000.00000 . .  .
dictionarybasedy = -1.5004000
functionbasedy = -1.5004000
Testing dictionary based y == function based y for loc3, 5, 2,  0.00000 . .  .
dictionarybasedy = 2.5900000
functionbasedy = 2.5900000
Testing dictionary based y == function based y for loc3, 5, 2, 950.00000 . .  .
dictionarybasedy = 2.5900000
functionbasedy = 2.5900000
Testing dictionary based y == function based y for loc3, 6, 1,  0.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc3, 6, 1,  1.08000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc3, 6, 1,  1.08001 . .  .
dictionarybasedy = 4.0650960
functionbasedy = 4.0650960
Testing dictionary based y == function based y for loc3, 6, 1, 5000.00000 . .  .
dictionarybasedy = -2.4335000
functionbasedy = -2.4335000
Testing dictionary based y == function based y for loc3, 6, 2,  0.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc3, 6, 2,  1.08000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc3, 6, 2,  1.08001 . .  .
dictionarybasedy = 4.0650960
functionbasedy = 4.0650960
Testing dictionary based y == function based y for loc3, 6, 2, 5000.00000 . .  .
dictionarybasedy = -2.4335000
functionbasedy = -2.4335000
Testing dictionary based y == function based y for loc3, 7, 1,  0.00000 . .  .
dictionarybasedy = 2.6700000
functionbasedy = 2.6700000
Testing dictionary based y == function based y for loc3, 7, 1,  1.05000 . .  .
dictionarybasedy = 2.6700000
functionbasedy = 2.6700000
Testing dictionary based y == function based y for loc3, 7, 1,  1.05001 . .  .
dictionarybasedy = 3.5919550
functionbasedy = 3.5919550
Testing dictionary based y == function based y for loc3, 7, 1, 5000.00000 . .  .
dictionarybasedy = -0.9071000
functionbasedy = -0.9071000
Testing dictionary based y == function based y for loc3, 7, 2,  0.00000 . .  .
dictionarybasedy = 2.6700000
functionbasedy = 2.6700000
Testing dictionary based y == function based y for loc3, 7, 2,  1.05000 . .  .
dictionarybasedy = 2.6700000
functionbasedy = 2.6700000
Testing dictionary based y == function based y for loc3, 7, 2,  1.05001 . .  .
dictionarybasedy = 3.5919550
functionbasedy = 3.5919550
Testing dictionary based y == function based y for loc3, 7, 2, 5000.00000 . .  .
dictionarybasedy = -0.9071000
functionbasedy = -0.9071000
Testing dictionary based y == function based y for loc3, 8, 1,  0.00000 . .  .
dictionarybasedy = 2.6400000
functionbasedy = 2.6400000
Testing dictionary based y == function based y for loc3, 8, 1,  1.00000 . .  .
dictionarybasedy = 2.6400000
functionbasedy = 2.6400000
Testing dictionary based y == function based y for loc3, 8, 1,  1.00001 . .  .
dictionarybasedy = 3.3285000
functionbasedy = 3.3285000
Testing dictionary based y == function based y for loc3, 8, 1, 5000.00000 . .  .
dictionarybasedy = 0.3291000
functionbasedy = 0.3291000
Testing dictionary based y == function based y for loc3, 8, 2,  0.00000 . .  .
dictionarybasedy = 2.6400000
functionbasedy = 2.6400000
Testing dictionary based y == function based y for loc3, 8, 2,  1.00000 . .  .
dictionarybasedy = 2.6400000
functionbasedy = 2.6400000
Testing dictionary based y == function based y for loc3, 8, 2,  1.00001 . .  .
dictionarybasedy = 3.3285000
functionbasedy = 3.3285000
Testing dictionary based y == function based y for loc3, 8, 2, 5000.00000 . .  .
dictionarybasedy = 0.3291000
functionbasedy = 0.3291000
Testing dictionary based y == function based y for loc3, 9, 1,  0.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 9, 1, 5000.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 9, 2,  0.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc3, 9, 2, 5000.00000 . .  .
dictionarybasedy = 2.4900000
functionbasedy = 2.4900000
Testing dictionary based y == function based y for loc1, 1, 1,  0.00000 . .  .
dictionarybasedy = 2.3500000
functionbasedy = 2.3500000
Testing dictionary based y == function based y for loc1, 1, 1, 5000.00000 . .  .
dictionarybasedy = 2.3500000
functionbasedy = 2.3500000
Testing dictionary based y == function based y for loc1, 1, 2,  0.00000 . .  .
dictionarybasedy = 2.4500000
functionbasedy = 2.4500000
Testing dictionary based y == function based y for loc1, 1, 2, 5000.00000 . .  .
dictionarybasedy = 2.4500000
functionbasedy = 2.4500000
Returning default Y for  loc1, 99,  1,  0.00000 . .  .
Testing dictionary based y == function based y for loc1, 99, 1,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc1, 99,  1, 5000.00000 . .  .
Testing dictionary based y == function based y for loc1, 99, 1, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc1, 99,  2,  0.00000 . .  .
Testing dictionary based y == function based y for loc1, 99, 2,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc1, 99,  2, 5000.00000 . .  .
Testing dictionary based y == function based y for loc1, 99, 2, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Testing dictionary based y == function based y for loc1, 4, 1,  1.29001 . .  .
dictionarybasedy = 6.7215720
functionbasedy = 6.7215720
Testing dictionary based y == function based y for loc1, 4, 1, 5000.00000 . .  .
dictionarybasedy = -9.2743000
functionbasedy = -9.2743000
Testing dictionary based y == function based y for loc1, 4, 1,  0.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 4, 1,  1.29000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 4, 2,  0.00000 . .  .
dictionarybasedy = 2.8215000
functionbasedy = 2.8215000
Testing dictionary based y == function based y for loc1, 4, 2, 5000.00000 . .  .
dictionarybasedy = 1.8215000
functionbasedy = 1.8215000
Testing dictionary based y == function based y for loc1, 5, 1,  1.25001 . .  .
dictionarybasedy = 5.7119500
functionbasedy = 5.7119500
Testing dictionary based y == function based y for loc1, 5, 1, 5000.00000 . .  .
dictionarybasedy = -7.2848000
functionbasedy = -7.2848000
Testing dictionary based y == function based y for loc1, 5, 1,  0.00000 . .  .
dictionarybasedy = 2.4700000
functionbasedy = 2.4700000
Testing dictionary based y == function based y for loc1, 5, 1,  1.25000 . .  .
dictionarybasedy = 2.4700000
functionbasedy = 2.4700000
Testing dictionary based y == function based y for loc1, 5, 2,  0.00000 . .  .
dictionarybasedy = 2.8733000
functionbasedy = 2.8733000
Testing dictionary based y == function based y for loc1, 5, 2, 5000.00000 . .  .
dictionarybasedy = 1.3733000
functionbasedy = 1.3733000
Testing dictionary based y == function based y for loc1, 6, 1,  0.00000 . .  .
dictionarybasedy = 2.5700000
functionbasedy = 2.5700000
Testing dictionary based y == function based y for loc1, 6, 1,  1.31000 . .  .
dictionarybasedy = 2.5700000
functionbasedy = 2.5700000
Testing dictionary based y == function based y for loc1, 6, 1,  1.31001 . .  .
dictionarybasedy = 4.9283420
functionbasedy = 4.9283420
Testing dictionary based y == function based y for loc1, 6, 1, 5000.00000 . .  .
dictionarybasedy = -4.0693000
functionbasedy = -4.0693000
Testing dictionary based y == function based y for loc1, 6, 2,  0.00000 . .  .
dictionarybasedy = 3.0612000
functionbasedy = 3.0612000
Testing dictionary based y == function based y for loc1, 6, 2, 5000.00000 . .  .
dictionarybasedy = 1.0612000
functionbasedy = 1.0612000
Testing dictionary based y == function based y for loc1, 7, 1,  1.31501 . .  .
dictionarybasedy = 6.5440550
functionbasedy = 6.5440550
Testing dictionary based y == function based y for loc1, 7, 1, 5000.00000 . .  .
dictionarybasedy = -8.4520000
functionbasedy = -8.4520000
Testing dictionary based y == function based y for loc1, 7, 1,  0.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 7, 1,  1.31500 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 7, 2,  0.00000 . .  .
dictionarybasedy = 2.9580000
functionbasedy = 2.9580000
Testing dictionary based y == function based y for loc1, 7, 2, 5000.00000 . .  .
dictionarybasedy = -12.5420000
functionbasedy = -12.5420000
Testing dictionary based y == function based y for loc1, 8, 1,  0.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 8, 1, 5000.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 8, 2,  0.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 8, 2, 5000.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc1, 9, 1,  0.00000 . .  .
dictionarybasedy = 2.5300000
functionbasedy = 2.5300000
Testing dictionary based y == function based y for loc1, 9, 1,  1.27500 . .  .
dictionarybasedy = 2.5300000
functionbasedy = 2.5300000
Testing dictionary based y == function based y for loc1, 9, 1,  1.27501 . .  .
dictionarybasedy = 6.4777175
functionbasedy = 6.4777175
Testing dictionary based y == function based y for loc1, 9, 1, 5000.00000 . .  .
dictionarybasedy = 4.9781000
functionbasedy = 4.9781000
Testing dictionary based y == function based y for loc1, 9, 2,  0.00000 . .  .
dictionarybasedy = 2.5400000
functionbasedy = 2.5400000
Testing dictionary based y == function based y for loc1, 9, 2, 5000.00000 . .  .
dictionarybasedy = 2.5400000
functionbasedy = 2.5400000
Testing dictionary based y == function based y for loc4, 1, 1,  0.00000 . .  .
dictionarybasedy = 2.7802000
functionbasedy = 2.7802000
Testing dictionary based y == function based y for loc4, 1, 1, 5000.00000 . .  .
dictionarybasedy = 1.2802000
functionbasedy = 1.2802000
Testing dictionary based y == function based y for loc4, 1, 2,  0.00000 . .  .
dictionarybasedy = 2.7802000
functionbasedy = 2.7802000
Testing dictionary based y == function based y for loc4, 1, 2, 5000.00000 . .  .
dictionarybasedy = 1.2802000
functionbasedy = 1.2802000
Returning default Y for  loc4, 99,  1,  0.00000 . .  .
Testing dictionary based y == function based y for loc4, 99, 1,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc4, 99,  1, 5000.00000 . .  .
Testing dictionary based y == function based y for loc4, 99, 1, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc4, 99,  2,  0.00000 . .  .
Testing dictionary based y == function based y for loc4, 99, 2,  0.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Returning default Y for  loc4, 99,  2, 5000.00000 . .  .
Testing dictionary based y == function based y for loc4, 99, 2, 5000.00000 . .  .
dictionarybasedy = 2.5000000
functionbasedy = 2.5000000
Testing dictionary based y == function based y for loc4, 4, 1,  0.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc4, 4, 1,  1.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc4, 4, 1,  1.00001 . .  .
dictionarybasedy = 4.5528000
functionbasedy = 4.5528000
Testing dictionary based y == function based y for loc4, 4, 1, 5000.00000 . .  .
dictionarybasedy = -5.4452000
functionbasedy = -5.4452000
Testing dictionary based y == function based y for loc4, 4, 2,  0.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc4, 4, 2, 1125.00000 . .  .
dictionarybasedy = 2.6500000
functionbasedy = 2.6500000
Testing dictionary based y == function based y for loc4, 4, 2, 1125.01000 . .  .
dictionarybasedy = 2.6808890
functionbasedy = 2.6808890
Testing dictionary based y == function based y for loc4, 4, 2, 5000.00000 . .  .
dictionarybasedy = -1.5816000
functionbasedy = -1.5816000
Testing dictionary based y == function based y for loc4, 5, 1,  0.00000 . .  .
dictionarybasedy = 4.2758000
functionbasedy = 4.2758000
Testing dictionary based y == function based y for loc4, 5, 1, 5000.00000 . .  .
dictionarybasedy = -4.7242000
functionbasedy = -4.7242000
Testing dictionary based y == function based y for loc4, 5, 2,  0.00000 . .  .
dictionarybasedy = 2.6535000
functionbasedy = 2.6535000
Testing dictionary based y == function based y for loc4, 5, 2, 5000.00000 . .  .
dictionarybasedy = 2.1535000
functionbasedy = 2.1535000
Testing dictionary based y == function based y for loc4, 6, 1,  0.00000 . .  .
dictionarybasedy = 2.5800000
functionbasedy = 2.5800000
Testing dictionary based y == function based y for loc4, 6, 1,  1.10000 . .  .
dictionarybasedy = 2.5800000
functionbasedy = 2.5800000
Testing dictionary based y == function based y for loc4, 6, 1,  1.10001 . .  .
dictionarybasedy = 4.0307700
functionbasedy = 4.0307700
Testing dictionary based y == function based y for loc4, 6, 1, 5000.00000 . .  .
dictionarybasedy = -2.4678000
functionbasedy = -2.4678000
Testing dictionary based y == function based y for loc4, 6, 2,  0.00000 . .  .
dictionarybasedy = 2.8081000
functionbasedy = 2.8081000
Testing dictionary based y == function based y for loc4, 6, 2, 5000.00000 . .  .
dictionarybasedy = 1.8081000
functionbasedy = 1.8081000
Testing dictionary based y == function based y for loc4, 7, 1,  0.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc4, 7, 1,  1.00000 . .  .
dictionarybasedy = 2.6000000
functionbasedy = 2.6000000
Testing dictionary based y == function based y for loc4, 7, 1,  1.00001 . .  .
dictionarybasedy = 4.3884000
functionbasedy = 4.3884000
Testing dictionary based y == function based y for loc4, 7, 1, 5000.00000 . .  .
dictionarybasedy = -4.6098000
functionbasedy = -4.6098000
Testing dictionary based y == function based y for loc4, 7, 2,  0.00000 . .  .
dictionarybasedy = 2.7334000
functionbasedy = 2.7334000
Testing dictionary based y == function based y for loc4, 7, 2, 5000.00000 . .  .
dictionarybasedy = 2.2834000
functionbasedy = 2.2834000

And that's it. The comparison bails on the first AssertionError - I like to fix problems one at a time. It took me about six runs to get everything matched.
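For anyone curious what the harness looks like, here is a minimal sketch of the compare-and-assert loop that would produce output like the log above. The two lookup callables and their signature are assumptions; only the print-both-values-then-assert pattern comes from the post.

# Comparison harness sketch. dictionary_based_y and function_based_y are
# assumed callables with the signature (loc, a, b, x) -> float.
def check(loc, a, b, x, dictionary_based_y, function_based_y):
    print(f"Testing dictionary based y == function based y for {loc}, {a}, {b}, {x:.5f} . . .")
    d = dictionary_based_y(loc, a, b, x)   # value from the lookup table
    f = function_based_y(loc, a, b, x)     # value from the formula
    print(f"dictionarybasedy = {d:.7f}")
    print(f"functionbasedy = {f:.7f}")
    # Bail on the first mismatch - one problem at a time.
    assert abs(d - f) < 1e-7, f"Mismatch for {loc}, {a}, {b}, {x}"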


Thank you for stopping by.
Categories: FLOSS Project Planets

Atelier now have a Logo! =D

Planet KDE - Wed, 2017-02-15 20:21

Yay! Now we have a logo! What do you think about it? Atelier is marking around six months of development, and now it's time to give you some updates. AtCore is on its way to becoming stable, and I'm working on the Atelier interface, so we can connect to AtCore and do some magic to everything [...]


Categories: FLOSS Project Planets

Drupal CMS Guides at Daymuse Studios: Product Display: Contextual Field Output in Drupal Commerce

Planet Drupal - Wed, 2017-02-15 19:42

Learn how to contextually output field data in Drupal Commerce with the Product Display tool. YouTube Video Tutorial included for e-commerce projects.

Categories: FLOSS Project Planets

Semaphore Community: Testing Python Applications with Pytest

Planet Python - Wed, 2017-02-15 18:58

This article is brought with ❤ to you by Semaphore.

Introduction

Testing applications has become a standard skill set required for any competent developer today. The Python community embraces testing, and even the Python standard library has good inbuilt tools to support testing. In the larger Python ecosystem, there are a lot of testing tools. Pytest stands out among them due to its ease of use and its ability to handle increasingly complex testing needs.

This tutorial will demonstrate how to write tests for Python code with pytest, and how to utilize it to cater for a wide range of testing scenarios.

Prerequisites

This tutorial uses Python 3, and we will be working inside a virtualenv.
Fortunately for us, Python 3 has inbuilt support for creating virtual environments.
To create and activate a virtual environment for this project, let's run the following commands:

mkdir pytest_project
cd pytest_project
python3 -m venv pytest-env

This creates a virtual environment called pytest-env in our working directory.

To begin using the virtualenv, we need to activate it as follows:

source pytest-env/bin/activate

As long as the virtualenv is active, any packages we install will be installed in our virtual environment, rather than in the global Python installation.

To get started, let's install pytest in our virtualenv.

pip install pytest

Basic Pytest Usage

We will start with a simple test. Pytest expects our tests to be located in files whose names begin with test_ or end with _test.py. Let's create a file called test_capitalize.py, and inside it we will write a function called capital_case which should take a string as its argument, and should return a capitalized version of the string. We will also write a test, test_capital_case to ensure that the function does what it says. We prefix our test function names with test_, since this is what pytest expects our test functions to be named.

# test_capitalize.py

def capital_case(x):
    return x.capitalize()

def test_capital_case():
    assert capital_case('semaphore') == 'Semaphore'

The immediately noticeable thing is that pytest uses a plain assert statement, which is much easier to remember and use compared to the numerous assertSomething functions found in unittest.

To run the test, execute the pytest command:

pytest

We should see that our first test passes.

A keen reader will notice that our function could lead to a bug. It does not check the type of the argument to ensure that it is a string. Therefore, if we passed in a number as the argument to the function, it would raise an exception.
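For instance, calling it with an integer at the interactive prompt produces the same failure that pytest will surface for us below:

>>> capital_case(9)
Traceback (most recent call last):
  ...
AttributeError: 'int' object has no attribute 'capitalize'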

We would like to handle this case in our function by raising a custom exception with a friendly error message to the user.

Let's try to capture this in our test:

# test_capitalize.py
import pytest

def test_capital_case():
    assert capital_case('semaphore') == 'Semaphore'

def test_raises_exception_on_non_string_arguments():
    with pytest.raises(TypeError):
        capital_case(9)

The major addition here is the pytest.raises helper, which asserts that our function should raise a TypeError in case the argument passed is not a string.

Running the tests at this point should fail with the following error:

    def capital_case(x):
>       return x.capitalize()
E       AttributeError: 'int' object has no attribute 'capitalize'

Since we've verified that we have not handled such a case, we can go ahead and fix it.

In our capital_case function, we should check that the argument passed is a string or a string subclass before calling the capitalize function. If it is not, we should raise a TypeError with a custom error message.

# test_capitalize.py

def capital_case(x):
    if not isinstance(x, str):
        raise TypeError('Please provide a string argument')
    return x.capitalize()

When we rerun our tests, they should be passing once again.

Using Pytest Fixtures

In the following sections, we will explore some more advanced pytest features. To do this, we will need a small project to work with.

We will be writing a wallet application that enables its users to add or spend money in the wallet. It will be modeled as a class with two instance methods: spend_cash and add_cash.

We'll get started by writing our tests first. Create a file called test_wallet.py in the working directory, and add the following contents:

# test_wallet.py
import pytest
from wallet import Wallet, InsufficientAmount

def test_default_initial_amount():
    wallet = Wallet()
    assert wallet.balance == 0

def test_setting_initial_amount():
    wallet = Wallet(100)
    assert wallet.balance == 100

def test_wallet_add_cash():
    wallet = Wallet(10)
    wallet.add_cash(90)
    assert wallet.balance == 100

def test_wallet_spend_cash():
    wallet = Wallet(20)
    wallet.spend_cash(10)
    assert wallet.balance == 10

def test_wallet_spend_cash_raises_exception_on_insufficient_amount():
    wallet = Wallet()
    with pytest.raises(InsufficientAmount):
        wallet.spend_cash(100)

First things first, we import the Wallet class and the InsufficientAmount exception that we expect to raise when the user tries to spend more cash than they have in their wallet.

When we initialize the Wallet class, we expect it to have a default balance of 0. However, when we initialize the class with a value, that value should be set as the wallet's initial balance.

Moving on to the methods we plan to implement, we test that the add_cash method correctly increments the balance with the added amount. On the other hand, we are also ensuring that the spend_cash method reduces the balance by the spent amount, and that we can't spend more cash than we have in the wallet. If we try to do so, an InsufficientAmount exception should be raised.

Running the tests at this point should fail, since we have not created our Wallet class yet. We'll proceed with creating it. Create a file called wallet.py, and we will add our Wallet implementation in it. The file should look as follows:

# wallet.py

class InsufficientAmount(Exception):
    pass

class Wallet(object):

    def __init__(self, initial_amount=0):
        self.balance = initial_amount

    def spend_cash(self, amount):
        if self.balance < amount:
            raise InsufficientAmount('Not enough available to spend {}'.format(amount))
        self.balance -= amount

    def add_cash(self, amount):
        self.balance += amount

First of all, we define our custom exception, InsufficientAmount, which will be raised when we try to spend more money than we have in the wallet. The Wallet class then follows. The constructor accepts an initial amount, which defaults to 0 if not provided. The initial amount is then set as the balance.

In the spend_cash method, we first check that we have a sufficient balance. If the balance is lower than the amount we intend to spend, we raise the InsufficientAmount exception with a friendly error message.

The implementation of add_cash then follows, which simply adds the provided amount to the current wallet balance.

Once we have this in place, we can rerun our tests, and they should be passing.

pytest -q test_wallet.py
.....
5 passed in 0.01 seconds

Refactoring our Tests with Fixtures

You may have noticed some repetition in the way we initialized the class in each test. This is where pytest fixtures come in. They help us set up some helper code that should run before any tests are executed, and are perfect for setting up resources that are needed by the tests.

Fixture functions are created by marking them with the @pytest.fixture decorator. Test functions that require fixtures should accept them as arguments. For example, for a test to receive a fixture called wallet, it should have an argument with the fixture name, i.e. wallet.

Let's see how this works in practice. We will refactor our previous tests to use test fixtures where appropriate.

# test_wallet.py
import pytest
from wallet import Wallet, InsufficientAmount

@pytest.fixture
def empty_wallet():
    '''Returns a Wallet instance with a zero balance'''
    return Wallet()

@pytest.fixture
def wallet():
    '''Returns a Wallet instance with a balance of 20'''
    return Wallet(20)

def test_default_initial_amount(empty_wallet):
    assert empty_wallet.balance == 0

def test_setting_initial_amount(wallet):
    assert wallet.balance == 20

def test_wallet_add_cash(wallet):
    wallet.add_cash(80)
    assert wallet.balance == 100

def test_wallet_spend_cash(wallet):
    wallet.spend_cash(10)
    assert wallet.balance == 10

def test_wallet_spend_cash_raises_exception_on_insufficient_amount(empty_wallet):
    with pytest.raises(InsufficientAmount):
        empty_wallet.spend_cash(100)

In our refactored tests, we can see that we have reduced the amount of boilerplate code by making use of fixtures.

We define two fixture functions, wallet and empty_wallet, which will be responsible for initializing the Wallet class in tests where it is needed, with different values.

For the first test function, we make use of the empty_wallet fixture, which provides the test with a wallet instance whose balance is 0.
The next three tests receive a wallet instance initialized with a balance of 20. Finally, the last test receives the empty_wallet fixture again. The tests can then make use of the fixture as if it had been created inside the test function, just like in the tests we had before.

Rerun the tests to confirm that everything works.

Utilizing fixtures helps us de-duplicate our code. If you notice a case where a piece of code is used repeatedly in a number of tests, that might be a good candidate to use as a fixture.

Some Pointers on Test Fixtures

Here are some pointers on using test fixtures:

  • Each test is provided with a newly-initialized Wallet instance, and not one that has been used in another test.

  • It is a good practice to add docstrings for your fixtures. To see all the available fixtures, run the following command:

pytest --fixtures

This lists out some inbuilt pytest fixtures, as well as our custom fixtures. The docstrings will appear as the descriptions of the fixtures.

wallet
    Returns a Wallet instance with a balance of 20
empty_wallet
    Returns a Wallet instance with a zero balance

Parametrized Test Functions

Having tested the individual methods in the Wallet class, the next step we should take is to test various combinations of these methods. This is to answer questions such as "If I have an initial balance of 30, and spend 20, then add 100, and later on spend 50, how much should the balance be?"

As you can imagine, writing out those steps in the tests would be tedious, and pytest provides quite a delightful solution: parametrized test functions.

To capture a scenario like the one above, we can write a test:

# test_wallet.py

@pytest.mark.parametrize("earned,spent,expected", [
    (30, 10, 20),
    (20, 2, 18),
])
def test_transactions(earned, spent, expected):
    my_wallet = Wallet()
    my_wallet.add_cash(earned)
    my_wallet.spend_cash(spent)
    assert my_wallet.balance == expected

This enables us to test different scenarios, all in one function. We make use of the @pytest.mark.parametrize decorator, where we can specify the names of the arguments that will be passed to the test function, and a list of arguments corresponding to the names.

The test function marked with the decorator will then be run once for each set of parameters.

For example, the test will be run the first time with the earned parameter set to 30, spent set to 10, and expected set to 20. The second time the test is run, the parameters will take the second set of arguments. We can then use these parameters in our test function.

This elegantly helps us capture the scenario:

  • My wallet initially has 0,
  • I add 30 units of cash to the wallet,
  • I spend 10 units of cash, and
  • I should have 20 units of cash remaining after the two transactions.

This is quite a succinct way to test different combinations of values without writing a lot of repeated code.

Combining Test Fixtures and Parametrized Test Functions

To make our tests less repetitive, we can go further and combine test fixtures and parametrized test functions. To demonstrate this, let's replace the wallet initialization code with a test fixture, as we did before. The end result will be:

# test_wallet.py

@pytest.fixture
def my_wallet():
    '''Returns a Wallet instance with a zero balance'''
    return Wallet()

@pytest.mark.parametrize("earned,spent,expected", [
    (30, 10, 20),
    (20, 2, 18),
])
def test_transactions(my_wallet, earned, spent, expected):
    my_wallet.add_cash(earned)
    my_wallet.spend_cash(spent)
    assert my_wallet.balance == expected

We create a new fixture called my_wallet that is exactly the same as the empty_wallet fixture we used before: it returns a wallet instance with a balance of 0. To use both the fixture and the parametrized arguments in the test, we include the fixture as the first argument and the parameters as the rest of the arguments.

The transactions will then be performed on the wallet instance provided by the fixture.

You can try out this pattern further, e.g. with the wallet instance with a non-empty balance and with other different combinations of the earned and spent amounts.
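For instance, here is a sketch of that variation: a fixture with a non-empty starting balance, plus parameters chosen to exercise it. The fixture name and the numbers are illustrative, not from the article.

# test_wallet.py (sketch) - assumes the Wallet class from above

@pytest.fixture
def funded_wallet():
    '''Returns a Wallet instance with a balance of 50'''
    return Wallet(50)

@pytest.mark.parametrize("earned,spent,expected", [
    (30, 10, 70),   # 50 + 30 - 10
    (0, 50, 0),     # spend the entire starting balance
])
def test_transactions_from_initial_balance(funded_wallet, earned, spent, expected):
    funded_wallet.add_cash(earned)
    funded_wallet.spend_cash(spent)
    assert funded_wallet.balance == expected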

Continuous Testing on Semaphore CI

Next, let's add continuous testing to our application using Semaphore CI to ensure that we don't break our code when we make new changes.

Make sure you've committed everything on Git, and push your repository to GitHub or Bitbucket, which will enable Semaphore to fetch your code. Next, sign up for a free Semaphore account, if you don't have one already. Once you've confirmed your email, it's time to create a new project.

Follow these steps to add the project to Semaphore:

  1. Once you're logged into Semaphore, navigate to your list of projects and click the "Add New Project" button:

  2. Next, select the account where you wish to add the new project.

  3. Select the repository that holds the code you'd like to build:

  4. Select the branch you would like to build. The master branch is the default.

  5. Configure your project as shown below:

  6. Once your build has run, you should see a successful build that should look something like this:

In a few simple steps, we've set up continuous testing.

Summary

We hope that this article has given you a solid introduction to pytest, which is one of the most popular testing tools in the Python ecosystem. It's extremely easy to get started with using it, and it can handle most of what you need from a testing tool.

You can check out the complete code on GitHub.

Please reach out with any questions or feedback you may have in the comments section below.

This article is brought with ❤ to you by Semaphore.

Categories: FLOSS Project Planets

Tarek Ziade: Molotov, simple load testing

Planet Python - Wed, 2017-02-15 18:00

I don't know why, but I am a bit obsessed with load testing tools. I've tried dozens, and I've built or been involved in the creation of over ten of them in the past 15 years. I am talking about load testing HTTP services with a simple HTTP client.

Three years ago I built Loads at Mozilla, which is still being used to load test our services - and it's still evolving. It was based on a few fundamental principles:

  1. A Load test is an integration test that's executed many times in parallel against a server.
  2. Ideally, load tests should be built with vanilla Python and a simple HTTP client. There's no mandatory reason we have to rely on a Load Test class or things like this - the lighter the load test framework is, the better.
  3. You should be able to run a load test from your laptop without having to deploy a complicated stack, like a load testing server and clients, etc. Because when you start building a load test against an API, step #1 is to start with small loads from one box - not going nuclear from AWS on day 1.
  4. A massively distributed load test should not happen from, or be driven by, your laptop. Your load test is one brick, and orchestrating a distributed load test is a problem that should be entirely solved by another piece of software that runs in the cloud on its own.

Since Loads was built, two major things happened in our little technical world:

  • Docker is everywhere
  • Python 3.5 & asyncio, yay!

Python 3.5+ & asyncio just means that unlike my previous attempts at building a tool that would generate as many concurrent requests as possible, I don't have to worry anymore about key principle #2: we can do async code now in vanilla Python, and I don't have to force ad-hoc async frameworks on people.

Docker means that for running a distributed test, a load test that runs from one box can be embedded inside a Docker image, and then a tool can orchestrate a distributed test that runs and manages Docker images in the cloud.

That's what we've built with Loads: "give me a Docker image of something that performs a small load test against a server, and I shall run it in hundreds of boxes." This Docker-based design was a very elegant evolution of Loads, thanks to Ben Bangert, who had the idea. Asking people to embed their load test inside a Docker image also means that they can use whatever tool they want, as long as it performs HTTP calls against the server under stress, and optionally sends some info via statsd.

But offering a helpful, standard tool for building the load test script that gets embedded in Docker is still something we want to do. And frankly, 90% of load tests happen from a single box. Going nuclear is not happening that often.

Introducing Molotov

Molotov is a new tool I've been working on for the past few months - it's based on asyncio, aiohttp and tries to be as light as possible.

Molotov scripts are coroutines to perform HTTP calls, and spawning a lot of them in a few processes can generate a fair amount of load from a single box.
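To make that concrete, here is a minimal scenario in the shape the project documents. The @scenario decorator and the aiohttp-style session argument follow Molotov's README at the time of writing; treat the weight and the URL as illustrative placeholders.

# loadtest.py - a minimal Molotov scenario (the URL is hypothetical)
from molotov import scenario

@scenario(weight=100)
async def hit_homepage(session):
    # Each spawned coroutine performs plain aiohttp calls against the target.
    async with session.get('http://localhost:8080') as resp:
        assert resp.status == 200

You would run it with something like molotov -w 10 -d 60 loadtest.py to spawn 10 workers for 60 seconds; check the current docs for the exact flags.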

Thanks to Richard, Chris, Matthew and others - my Mozilla QA teammates - I had some great feedback while creating the tool, and I think it's almost ready to be used by more folks. It still needs to mature, and the docs need to improve, but the design is settled, and it works well already.

I've pushed a release to PyPI and plan to push a first stable release this month, once the test coverage is looking better and the docs are polished.

But I think it's ready for a bit of community feedback. That's why I am blogging about it today -- if you want to try it or help build it, here are a few links:

Try it with the console mode (-c), try to see if it fits your brain and let us know what you think.

Categories: FLOSS Project Planets

S. Lott: Intro to Python CSV Processing for Actual Beginners

Planet Python - Wed, 2017-02-15 15:37
I've written a lot about CSV processing. Here are some examples: http://slott-softwarearchitect.blogspot.com/search/label/csv.

It crops up in my books. A lot.

In all cases, though, I make the implicit assumption that my readers already know a lot of Python. This is a disservice to anyone who's getting started.
Getting Started

You'll need Python 3.6. Nothing else will do if you're starting out.

Go to https://www.continuum.io/downloads and get Python 3.6. You can get the small "miniconda" version to start with. It has some of what you'll need to hack around with CSV files. The full Anaconda version contains a mountain of cool stuff, but it's a big download.

Once you have Python installed, what next? To be sure things are running do this:
  1. Find a command line prompt (terminal window, cmd.exe, whatever it's called on your OS.)
  2. Enter python3.6 (or just python in Windows.)
  3. If Anaconda installed everything properly, you'll have an interaction that looks like this:

MacBookPro-SLott:Python2v3 slott$ python3.5
Python 3.5.1 (v3.5.1:37a07cee5969, Dec  5 2015, 21:12:44)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
More-or-less. (Yes, the example shows 3.5.1 even though I said you should get 3.6. As soon as the Lynda.com course drops, I'll upgrade. The differences between 3.5 and 3.6 are almost invisible.)
Here's your first interaction.
>>> 355/113
3.1415929203539825
Yep. Python did math. Stuff is happening.
Here's some more.
>>> exit
Use exit() or Ctrl-D (i.e. EOF) to exit
>>> exit()
Okay. That was fun. But it's not data wrangling. When do we get to the good stuff?

To Script or Not To Script

We have two paths when it comes to scripting. You can write script files and run them. This is pretty normal application development stuff. It works well.
Or.
You can use a Jupyter Notebook. This isn't exactly a script. But. You can use it like a script. It's a good place to start building some code that's useful. You can rerun some (or all) of the notebook to make it script-like.

If you downloaded Anaconda, you have Jupyter. Done. Skip over the next part on installing Jupyter.
Installing Jupyter

If you did not download the full Anaconda -- perhaps because you used the miniconda -- you'll need to add Jupyter. You can use the command conda install jupyter for this.

Another choice is to use the pip program to install Jupyter. The net effect is the same. It starts like this:


MacBookPro-SLott:Python2v3 slott$ pip3 install jupyter
Collecting jupyter
  Downloading jupyter-1.0.0-py2.py3-none-any.whl
Collecting ipykernel (from jupyter)
  Downloading ipykernel-4.5.2-py2.py3-none-any.whl (98kB)
    100% |████████████████████████████████| 102kB 1.3MB/s
It ends like this.

  Downloading pyparsing-2.1.10-py2.py3-none-any.whl (56kB)
    100% |████████████████████████████████| 61kB 2.1MB/s
Installing collected packages: ipython-genutils, decorator, traitlets, appnope, appdirs, pyparsing, packaging, setuptools, ptyprocess, pexpect, simplegeneric, wcwidth, prompt-toolkit, pickleshare, ipython, jupyter-core, pyzmq, jupyter-client, tornado, ipykernel, qtconsole, terminado, nbformat, entrypoints, mistune, pandocfilters, testpath, bleach, nbconvert, notebook, widgetsnbextension, ipywidgets, jupyter-console, jupyter
  Found existing installation: setuptools 18.2
    Uninstalling setuptools-18.2:
      Successfully uninstalled setuptools-18.2
  Running setup.py install for simplegeneric ... done
  Running setup.py install for tornado ... done
  Running setup.py install for terminado ... done
  Running setup.py install for pandocfilters ... done
Successfully installed appdirs-1.4.0 appnope-0.1.0 bleach-1.5.0 decorator-4.0.11 entrypoints-0.2.2 ipykernel-4.5.2 ipython-5.2.2 ipython-genutils-0.1.0 ipywidgets-5.2.2 jupyter-1.0.0 jupyter-client-4.4.0 jupyter-console-5.1.0 jupyter-core-4.2.1 mistune-0.7.3 nbconvert-5.1.1 nbformat-4.2.0 notebook-4.4.1 packaging-16.8 pandocfilters-1.4.1 pexpect-4.2.1 pickleshare-0.7.4 prompt-toolkit-1.0.13 ptyprocess-0.5.1 pyparsing-2.1.10 pyzmq-16.0.2 qtconsole-4.2.1 setuptools-34.1.1 simplegeneric-0.8.1 terminado-0.6 testpath-0.3 tornado-4.4.2 traitlets-4.3.1 wcwidth-0.1.7 widgetsnbextension-1.2.6


Now you have Jupyter.
What just happened? You installed a large number of Python packages. All of those packages were required to run Jupyter. You can see jupyter-1.0.0 hidden in the list of packages that were installed.

Starting Jupyter

The Jupyter tool does a number of things. We're going to use the notebook feature to save some code that we can rerun. We can also save notes and do other things in the notebook. When you start the notebook, two things will happen.
  1. The terminal window will start displaying the Jupyter console log.
  2. A browser will pop open showing the local Jupyter notebook home page.
Here's what the console log looks like:
MacBookPro-SLott:Python2v3 slott$ jupyter notebook
[I 08:51:56.746 NotebookApp] Writing notebook server cookie secret to /Users/slott/Library/Jupyter/runtime/notebook_cookie_secret
[I 08:51:56.778 NotebookApp] Serving notebooks from local directory: /Users/slott/Documents/Writing/Python/Python2v3
[I 08:51:56.778 NotebookApp] 0 active kernels
[I 08:51:56.778 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/?token=2eb40fbb96d7788dd05a49600b1fca4e07cd9c8fe931f9af
[I 08:51:56.778 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
You can glance at it to see that things are still working. The "Use Control-C to stop this server" is a reminder of how to stop things when you're done.
Your Jupyter home page will have this logo in the corner. Things are working.

You can pick files from this list and edit them. And -- important for what we're going to do -- you can create new notebooks.
On the right side of the web page, you'll see this:

You can create files and folders. That's cool. You can create an interactive terminal session. That's also cool. More important, though, is that you can create a new Python 3 notebook. That's where we'll wrangle with CSV files.

"But Wait," you say. "What directory is it using for this?"

The jupyter server is using the current working directory when you started it.

If you don't like this choice, you have two alternatives.
  • Stop Jupyter. Change directory to your preferred place to keep files. Restart Jupyter.
  • Stop Jupyter. Include the --notebook-dir=your_working_directory option.
The second choice looks like this:
MacBookPro-SLott:Python2v3 slott$ jupyter notebook --notebook-dir=~/Documents/Writing/Python
[I 11:15:42.964 NotebookApp] Serving notebooks from local directory: /Users/slott/Documents/Writing/Python
Now you know where your files are going to be. You can make sure that your .CSV files are here. You will have your ".ipynb" files here also. Lots of goodness in the right place.

Using Jupyter

Here's what a notebook looks like. Here's a screen shot.

First. The notebook was originally called "untitled" which seemed less than ideal. So I clicked on the name and changed it to "csv_wrestling".

Second. There was a box labeled In [ ]:. I entered some Python code to the right of this label. Then I clicked the run cell icon. (It's similar to this emoji --  ⏯ -- but not exactly.)

The In [ ]: changed to In [1]:. A second box appeared labeled Out [1]:. This annotates our dialog with Python: each input and Python's response is tracked. It's pretty nice. We can change our input and rerun the cell. We can add new cells with different things to run. We can run all of the cells. Lots of things are possible based on this idea of a cell with our command. When we run a cell, Python processes the command and we see the output.

For many expressions, a value is displayed.  For some expressions, however, nothing is displayed. For complete statements, nothing is displayed. This means we'll often have to throw the name of a variable in to see the value of that variable.
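A quick illustration of that point, reusing the 355/113 arithmetic from earlier (the cell numbers are whatever Jupyter assigns in your session):

In [2]: total = 355/113    # an assignment statement: no Out[] line appears
In [3]: total              # a bare expression: Jupyter displays its value
Out[3]: 3.1415929203539825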


The rest of the notebook is published separately. It's awkward to work in Blogger when describing a Jupyter notebook. It's much easier to simply post the notebook on GitHub.

The notebook is published here: slott56/introduction-python-csv. You can follow the notebook to build your own copy which reads and writes CSV files.
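If you want a taste of what the notebook walks through before you open it, here is a minimal sketch using the standard library's csv module. The file name and column names are made up for illustration; the notebook's actual examples differ.

# CSV sketch: read one CSV, write a filtered copy.
# 'scores.csv' with 'name' and 'score' columns is hypothetical.
import csv

with open('scores.csv', newline='') as source:
    rows = list(csv.DictReader(source))    # each row becomes a dict

high = [row for row in rows if float(row['score']) > 90]

with open('high_scores.csv', 'w', newline='') as target:
    writer = csv.DictWriter(target, fieldnames=['name', 'score'])
    writer.writeheader()
    writer.writerows(high)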


Categories: FLOSS Project Planets

Drupal Association blog: DrupalCon Vienna Program Changes

Planet Drupal - Wed, 2017-02-15 11:59

It is our goal at The Drupal Association to be sustainable, so we can deliver on our mission for years to come. In 2016 we reorganized to achieve this goal, which moved us into healthier financial waters. However, we still have work to do.

After financial analysis, we can see that some programs sustain themselves and some do not. The good news is, most do. Unfortunately, DrupalCon Europe often does not. In the past, we’ve taken a loss because DrupalCon Europe is an important way in which we serve the community. Now, with renewed focus on our financial health, we need to rethink how we achieve this event in a more sustainable way.

To make sure we are balanced between serving our mission and making this event sustainable, The Drupal Association and Board created a DrupalCon Europe task force. After looking at the event financials we determined the best way to achieve notable expense savings was to eliminate our Monday programming and activities. This cut includes full day training sessions, the Community Summit and Business Summit, and one day of sprinting.

DrupalCon training and Business Summit registrations have noticeably declined in recent years, making them the most reasonable programs to eliminate. The Business Summit cancellation is balanced by increasing the value of business track programming, the CEO dinner, and organically organized BOFs throughout the week.

Cancelling a day of sprints was a tough decision; however, recent feedback shows that numerous sprints are a contributing factor to burnout, and the eliminated sprint is just one of many we offer during DrupalCon. The sprint was a healthy choice to eliminate, both financially and programming-wise.

While some of those individual activities generated revenue or directly supported mission work, the most direct way to achieve notable savings is to eliminate one full day of production: the cost of having staff prepare the venue, running heating and cooling, providing and servicing WiFi, and catering. Some of the traditional Monday activities will shift to later in the week; for instance, the opening night reception will move to Tuesday evening.

The other direct cost saving we can realize without eliminating programming, is to no longer offer free attendee t-shirts and physical tote bags. We know, the free t-shirts are awesome and this is not fun to read. The upside is that sponsors - who do an amazing job helping fund DrupalCon - always bring their t-shirt-best to the exhibit hall. We are evaluating if we can provide a collectible t-shirt for purchase as an alternative option. Additionally, we will be providing electronic tote bags with giveaways from sponsors.

We wanted to communicate the notable changes above as soon as possible. But some other, more minor details are still being ironed out. For instance…

We aren't approaching this puzzle from just an expense standpoint. We are also looking at plans to make the event more financially accessible for attendees. We are researching options like ticket discounts for qualified attendees and early bird discounts for Drupal Association members. These are not set in stone, just explorations at this point.

Beyond the financial accessibility of a DrupalCon ticket, we are looking to expand DrupalCon attendance by inviting new attendees. We need more contributors, a next generation of sprint mentors, on-site DrupalCon volunteers, and enthusiasts who go home and champion Drupal to their colleagues and peers. We will be launching support campaigns to introduce new audiences to DrupalCon, and equip our partners and sponsors with the tools and support to promote DrupalCon to their own circles.

This plan is still evolving and things may change. There may still be additional changes as we learn and see the impacts of the decisions we’ve already made. We're working hard to balance costs with benefits, and preserve the strong community experience that makes DrupalCon special. We will continue to communicate as we go - we know DrupalCon is as important to all of you, as it is to us.

Categories: FLOSS Project Planets

Acquia Developer Center Blog: 247: Diversity, Differentiation, Value(s) with Tim Deeson

Planet Drupal - Wed, 2017-02-15 11:38

While passing through London in late 2016, I sat down with Tim Deeson, lead at the Deeson agency. We talked about the history of his company, how delivering value with Drupal is about more than delivering code, and Tim's revelations and actions regarding diversity at his company and in the tech industry.

Resources / Mentioned
  • White men in digital - our privilege is blinding - Deeson blog, September, 2016
  • A progress update on creating inclusive teams at Deeson - Deeson blog, January, 2017
  • The ten actions Tim and Deeson committed to in 2016 to improve diversity in their company:
    • Begin annual salary audits to check for bias and rectify imbalances
    • Report on our progress when we do our quarterly planning
    • Implicit bias training for everyone
    • Stop attending conferences that don’t have a credible Code of Conduct
    • During hiring, take a more nuanced view on whether a developer has made open source contributions
    • Stop participating in all male conference panels
    • Improve our Careers page, including clarity on parental leave
    • Stop asking for previous salary during hiring – it can perpetuate pay inequality
    • Create dialogue and feedback channels within the company to offer better support
    • Stay informed and signpost groups working in the industry
Conversation Video

[Full Conversation Transcript] The Deeson Origin Story

jam: Tim Deeson, you run what’s now called the Deeson Agency. Is that right?

Tim: Yes, that’s our social media name, but just “Deeson.”

jam: Deeson.

Tim: Yes.

jam: What’s the history of Deeson?

Tim: Deeson is a family business. It was started by my grandfather in the ‘50s, and was a contract publishing company. In 2001, we started the digital agency and that’s the main part of business, but we, actually, still have a small publishing company, too.

jam: You and I talked several years ago now on the podcast about the origins of the business and coming from print to digital and all of those things, but it’s really one of those stories about like, “Yes, I can build a website, Dad!”

Tim: Yes. I came back from backpacking and needed some money. When I lived in San Francisco, a family friend who worked for Apple taught me how to hack around with Macs and stuff. This was in pre-CSS days, actually. I started making websites, kept picking up clients, and kind of went from there. In about 2007, we started doing Drupal.

jam: Pre-CSS.

Tim: Yes. This was Adobe GoLive ... CSS was just starting to come out there.

jam: Do you still write your HTML in all caps?

Tim: Yes. I don’t write HTML anymore.

jam: Do you still use spacer gifs?

Tim: Yes, and I shrink a massive table and it's mostly made of one-pixel gifs. That's the only way I know, unfortunately. CSS passed me by.

jam: They let you do the managing now.

Tim: Yes. More the spreadsheets and the blog posts, probably more where my talents lie, in reality.

Delivering Value with Drupal

jam: Can you talk about the difference between delivering code or delivering Drupal and delivering value higher up the value chain?

Tim: Yes. I guess I always look at these things as kind of nested. You know, often clients don't necessarily have a strategy that they work to. Things like Drupal are tools that, nested within a strategy, can deliver value because they have certain attributes. But for me it starts at the top: understanding the goal - if we want to grow sales in Europe, for example - then looking at ways a digital channel can deliver that, and asking what's a cost-effective, long-term way of delivering those sales reliably, and where that keeps stacking up.

jam: “I want a website” or “I have a website” is no longer transformational, right?

Tim: No. Absolutely. We work really hard with clients to make sure that they understand what’s the business change. We always treat our projects and change management and transformational projects--no one actually wants a website. What they want is happier customers or better-informed surgeons. We’re always looking at the KPIs that will actually measure the business impact of the platform that we’re going to build rather than who should be biggest on a home page or is a carousel a good thing or a bad thing? Whenever we get into those debates, we know that we’ve lost sight of the some key goals that, for the business, are the things that really matter.

jam: The flipside of that coin is delivering websites, whether Drupal or whatever, is also not necessarily a differentiator anymore.

Move up the Chain

Tim: No. Certainly not. Digital agencies, generally, are fighting a kind of commoditization. We think about agency work as kind of, “Do it for me”, “Help me think” or “Think for me”.

"Do it for me" is just doing exactly what you're told, and it's very easy for that to become commoditized.

"Help me think" starts to get into the UX and design side of the work: I want you to help me work out what the features on the site are.

"Think for me" is the strategy. They're coming to you with just a relatively short brief, often something like, "I need all of the surgeons in Europe who do heart surgery to understand these training techniques. What are you going to do about it?" That's where we can actually work with them on the overall strategy, which will have a strong digital element if they've come to us, but it's an open brief, effectively.

jam: “Do it for me”, “Help me think”, or, “Think for me.” Okay.

Tim: We kind of look at that as a commoditization curve within the market, where if you're just delivering exactly what you've been asked to deliver, then that's very easy to niche or offshore, for example, and you're really competing on cost.

jam: Or Squarespace or Wix or wordpress.com, right?

Tim: Yes. And increasingly, those tools get so sophisticated that the client can actually deliver those solutions themselves. You're not adding much intellectual value anymore, which means it's potentially not going to be good for rates and retention.

jam: Right. So, not only do you have to deliver more value, but you have to differentiate, right?

Tim: Yes. Those things are often interlinked, but it's about understanding what value you're really adding. Some of it is hygiene value: reliable code, secure working platforms. But increasingly, there is a significant chunk of competitor agencies - not endless numbers, but certainly some - that I would count as great agencies. How do you then differentiate yourself amongst those agencies as well?

Diversity and Differentiation

jam: You and I have been talking a bit about one of the things that we might consider a differentiator. I was wondering if you could talk about your recent journey in the worlds that we’ll broadly define as diversity.

Tim: Yes. Probably about nine months ago, we were looking at DrupalCon Europe sponsorships, and it's an issue that had been in the back of my mind: are we really representing the kinds of communities that we're part of? I had a niggle that things probably weren't right. Intuitively, I thought we were probably not doing very well on this issue, and whether the motivation is moral, ethical, or commercial, it's really important that we improve. We came into the DrupalCon sponsorship season, we were looking at some sponsorships, and Women in Drupal came up. "Yes, that sounds like it would be really interesting and useful to support. Really happy to do that." Then I thought, "Okay, what does this actually mean? What's the point of Women in Drupal as an event?" Through that process of analysis, I started doing research, and you start to reflect on how you've performed as a company. I came to the realization that while we had the positive, well-intentioned stance of "we're not actively doing anything harmful," we weren't doing anything proactive or constructive to address the issues in the industry, and certain issues I could see much closer to home, within the company, too. We had a leadership team where none of the top four roles were held by women; only one of our top 10 roles was held by a woman. To me that doesn't sound like great gender balance if half the population are women. Statistically, we seem to have quite a quirk in there.

jam: Gender, of course, is not the only axis of diversity that you need to look at, right?

Tim: Yes. We were looking across the board, I guess, around race, sexuality ... There are quite a few different characteristics where society at large has, historically, not performed well in terms of creating equal opportunities for people ... discrimination.

jam: Age is another one that, I think, is especially important. I’ve seen some large corporate organizations get to a tough point economically and then fire everyone between senior management and middle management, because those are the people in their 50s and they’re too expensive. Then, five or ten years down the track, they make huge organizational mistakes, because there wasn’t a knowledge transfer: the people who’d gone through that mistake the last time it happened, 15 years before, weren’t there anymore. There are so many ways this matters. There have been a lot of studies about the economic and business value of diversity, and having been part of many different teams, I’ve felt, myself, that the more diverse a team you have, the better a solution you can arrive at.

Talk about getting more value out of your business this way. I mean that in a positive way.

Tim: Yes. During the research, and I wrote a blog post on this because of the amount of research I’d done, I found there wasn’t an easy starting point for becoming better informed on this issue fairly quickly. Unconscious bias was something that I wasn’t really that aware of as a topic, but quickly...

jam: Well, it is called “unconscious” bias.

Tim: Yes, I guess I get some sort of a pass on that. Really, unless you are aware of the fact that we all hold these biases unconsciously, and that they influence our behavior, our decisions, and how we interpret the world, and unless you’re willing to proactively engage with that as an organization collectively, as well as individually, this stuff is going to have an influence even if you think, “I’m actively trying not to discriminate.” Unless you’re a little bit more aware of what could be going on underneath, some of the signals you send with some of the smaller decisions you make are going to have an influence. That was something that I realized had an impact on our work. We do things like user experience research and design, which are very subjective processes where we’re making very subjective decisions all the time. Without awareness of how those decisions could be influenced, we were probably making poorer-quality decisions: default or comfort decisions, without having really understood the problem space or the possibilities of what we could be doing. That was something I felt we really should be training for. Some companies will have people in management, HR, and hiring roles trained in unconscious bias, but we were creating solutions to be used in the wider world, relying on significant amounts of judgment alongside evidence wherever we could get it. Even the decisions you make about where you’re going to do research will have an impact. So that was something I felt really strongly we would benefit from as a company. It would improve the quality of our work, but it would also create a fairer workplace and a fairer culture, I guess, too.

Is Diversity a UX Challenge?

jam: Actually, while you were saying that, I thought to myself that thinking of this as a UX challenge might be a really useful paradigm. There are several things about UX that come to mind, completely spontaneously, now. Some great UX practitioners I’ve met are the people who can walk into work every day, look at the same interface, and never get comfortable with it: never get used to that workflow where you have to do that one extra thing that’s really uncomfortable. They’re never satisfied with it. I, and I think most people, just learn how to click through whatever we do in a day and get on with it. Fighting consciously against unconscious bias by remaining as open and as perceptive as possible, that sounds really great. Then, I suppose, if you somehow designed it as a process, you could really proactively look for a great user experience of your organization, right?

Tim: Yes. It’s a good way to think of company culture: continuous improvement, and a process where you’re never done. This isn’t a problem that is going to go away. No matter what we do as a company, we can’t solve it; the industry isn’t going to fix it, and society isn’t going to fix it overnight. The risk there is that it can lead to apathy, that kind of stagnation: “Yes, this is terrible. What are you going to do?” Then everyone moves on to the next thing. Actually, I guess what I realized is that this is made up of millions of small things, and you can nudge change by doing the things that you can control. We’re not going to fix it overnight, but that’s even more of a reason to do something, because it’s not the kind of problem where we can just make a decision and make it go away.

jam: Nikki Stevens keynoted Drupal Costa Rica 2016, and her keynote was largely about diversity, but also about community and software. She pointed out that any improvement you can make, no matter how small, even if it’s only for your local community, makes the world better, and makes Drupal, in our case, better. That’s a great point: no matter how small a change you make today, it still adds up to making a difference. I like that.

Tim: Yes. It could have an impact even if it positively affects one person’s life. There is this ripple effect: it prompts change and reflection in people that could influence the rest of their lives. There’s something really powerful in that, and it can get lost in the big-company, PR-spun version of how you address this kind of thing. That version loses the fact that there’s an evolutionary, iterative element to this that’s about raising awareness. It’s often not about people being bad or wrong; it’s about how you keep nudging this stuff in the right direction rather than doing one thing and then disappearing for another five years.

jam: Compare being passively, happily open to everyone and accepting of everything: “if you come to us, you’ll have a great experience!” Compare that, which I imagine was your state a year or two ago, to proactively saying, “We want to make Deeson a better company, and one of the measurements that we are going to take for our company’s health and success is our diversity.” The passive versus the active.

Tim: Yes. It’s easy to think, “I don’t consciously discriminate; therefore, we don’t have a problem,” and just turn around and walk away. That was really the state...

jam: Tim doesn’t have a problem. You have a problem.

Tim: Exactly. It’s realizing that the problem is much more insidious than that, in a way. Unconscious bias, I guess, is baked into us as a society and as humans; we just carry these biases with us. There was that blissful ignorance we were in before: “We’re sure we’re not actively doing anything harmful; therefore, we’re fine.” Once I started to gain more awareness, I realized that just didn’t cut it. Even our unconscious actions, like how our careers page was written, for example, would be sending strong signals to candidates about who was welcome or not within the company. If you have that stereotypical kind of startup ...

jam: “We want rock stars and ninjas and senior, super senior developers!”

Tim: Exactly. With the photos of six guys playing pool late at night, drinking beer, you’ve probably already started to send a message to people with families: if they ever want to see their families again, then they’re ... It’s that kind of thing where you start to go, actually, maybe if you’re a woman and you don’t want to spend the rest of your career surrounded only by men in their twenties (and there’s an age point in there, too), you’ve already started to indicate who’s welcome here, who fits in, who’s the default, and who are you. Think about how prominently you talk about parental leave, for example, because if you’re not talking about parental leave at all during recruitment, then you really are probably aiming at the much younger end of the market. There are all these sorts of things. One thing I found really interesting: we used to have a big push on open source contributions. If you were being hired for a technical role, we really wanted to see that you’d been active in the open source community. What I realized is how that could potentially be quite discriminatory: if you were, say, a single mum, you’re potentially not going to have had the time or the money to do loads of free work on open source code, because you’re bringing up a family and working to support them.

jam: Or you might be a great developer of any age, or whatever, who had an employer who didn’t support it or permit it at all. And you have a family, or you have a hobby; you have an actual life. (I wish I knew what that was like. No, I’m kidding.) That’s a great point. The idea that even the photos you put on your website to talk about your own company matter, that’s really...

Tim: And what language you use: does it feel competitive, adversarial, like it’s going to be “Top Gun” style, or is it about supporting people to do their very best? There are some really interesting studies that show how that kind of language will, stereotypically, be responded to differently by men versus women, for example.

jam: Sure. In the very early days of Acquia, and I mean I’ve been at Acquia for eight years now, so the statute of limitations on some of this has expired: we had “rock star” and “ninja” hiring language on that page. I know because I had conversations with people at DrupalCon, amazing people, people who would have been great at that phase of Acquia, who said, “Oh, well, I don’t think I could come in as a rock star. I don’t think I could ever apply to Acquia.”

Tim: Yes. You end up with self-fulfilling prophecies. You hire more and more people like the people you have, because recruitment is a marketing exercise: your recruitment marketing appeals to a certain type of person, which means it attracts a certain type of person, which means you create a culture that has a certain type of person. That’s often a narrow slice of the variety of people who would bring a lot of benefit to the company, with different perspectives that can stop very narrow groupthink, I think.

jam: To your point, because it’s unconscious bias, often ... “That sort of just happened to us and we don’t know why, because we would be really open to having everybody, right?”

Tim: But no one applies, which... That was one of the points I made in the blog post, around the “pipeline problem”: “It’s the pipeline. We don’t get the applicants, so what can we do?” Actually, part of why you don’t get those applicants is that you only appeal to one type of person. You’ve made it clear that it’s only a safe, welcoming place for certain types of people, because every single piece of your marketing says that, unintentionally and unconsciously. That was the other part: raising awareness. It’s very rarely a right-or-wrong cultural decision with these things; raising awareness and prompting the debates internally started to change our culture, growing our understanding of the impact that certain language or certain environment choices could have on people. One of those things was the use of “guys” when talking to a group of people that may or may not be all men.

jam: The word “guys”. You have just hit on one of my biggest pet peeves. Time out, everyone. Land at London Heathrow any day of the week, especially off a big international flight when Terminal 5 is really full, and you have all those helpful people standing around yelling at you, right? An aircraft of 300 to 800 adults: well enough dressed, tired, jetlagged. Honestly, the last thing I want to hear... How about, “Excuse me, ladies and gentlemen. Please go down this way”? I don’t need to be served, right? Or, “Yes, excuse me. Everyone move this way.” All these things would be great. I’m sure there are other good options. Instead, what is it? “Guys, down this way. Yes. Guys, please, move along. Guys.”

Tim: Yes.

jam: It does my head in that “guys” is now the formal way to address a group. And thanks to Tim Deeson, I now know that this is like a symptom of this unconscious bias, as well.

Tim: Sometimes that debate can get derailed into “do I find it offensive or not offensive?” In my experience, people generally don’t. It’s actually about sending this really subtle, tiny signal that the default people we’re talking to, in these situations, are men.

jam: Plus, I have to say that offense is actually not a yardstick to measure by, because it’s very subjective and very emotional, and it isn’t really the point. Stephen Fry talks a lot about the concept of “your offense is not my business.” That’s a really interesting point, as well.

Tim: I mean, what’s unfortunate about it is often it can kind of veer into this political correctness kind of issue.

jam: And apologies. So, we’re not talking about anything that needs apologizing for. You’re doing something to fix it now, right?

Tim: Yes.

jam: I don’t like long, self-flagellating kind of conversations about this stuff either. That’s a fair point.

So where to now?

jam: Tell me, have you formulated a goal for Deeson in terms of diversity? Is there something – is there a simple statement that you’ve got?

Tim: At the end of the blog post, I published ten things we were going to start doing. For example, only attending conferences that have a credible code of conduct, to ensure that consideration has been given to creating an inclusive, positive environment for all participants. On conferences: some of the reading I did... if you don’t do the reading, it can sound like, “Is this really that big a deal?” But, particularly in the US, there have been incredibly serious, incredibly common incidents at industry conferences where no real consideration was paid to large chunks of the audience. As for targets, I personally don’t believe in targeting percentages, because they can create all sorts of problems. What I found was that it was about the environment we were creating rather than about absolute numbers. You can use percentages as a kind of dipstick: “Does this feel like it represents the communities we’re in?” Actually, it was about making sure our recruiting and marketing make it clear that we’re a welcoming and inclusive environment for everyone, not just people in a very narrow group. It’s those knock-on effects, the behaviors we undertake, rather than being particularly attached to specific outcomes; the outcomes will come through. What you don’t want is to decide that hitting certain percentages is what good looks like and try to force that through by changing a few behaviors. That’s where you can end up with really troublesome... you’re not going...

jam: Those are the old conversations about a token woman or a token whatever.

Tim: Exactly.

jam: Rather than a concrete statement or a concrete goal, would it be fair to say that you have a process that you are executing on every day and that your end goal is simply improvement on this area?

Tim: Yes. The only way to think about it is continuous improvement and raising awareness. Having that awareness of the issues prompts conversations that don’t otherwise happen.

jam: Okay. I’ll link to that blogpost for sure. I’ll probably quote your 10 points in the post with this conversation.

Tim, thank you for taking the time to talk with me. I really admire your moment of revelation, but especially that you’re acting on it. It would be really cool to check in again on this at an apropos moment.

Tim: Of course. Thanks so much for having me.

jam: Great. Thanks, Tim.

Tim: Cheers.

Categories: FLOSS Project Planets

Lullabot: Using the serialization system in Drupal

Planet Drupal - Wed, 2017-02-15 11:00

As part of the API-first initiative, I have been working a lot with the serialization module. This module is a key member of the web-service-oriented modules present both in core and contrib.

The main focus of the serialization module is to encapsulate Symfony's serialization component. Note that there is no separate deserialization component. This single component is in charge of serializing and deserializing incoming data.

When I started working with this component the first question that I had was "What does serialize mean? And how is it different from deserializing?". In this article I will try to address this question and give a brief introduction on how to use it in Drupal 8.

Serializers, encoders, and normalizers

Serialization is the process of normalizing and then encoding an input object. Similarly, we refer to deserialization as the process of decoding and then denormalizing an input string. Encoding and decoding are the reverse processes of one another, just like normalizing and denormalizing are.

In simple terms, we want to be able to turn an object of class MyClass into a particular string representation, and then be able to turn that string back into the original object.

An encoder is in charge of converting simple data (a set of scalars, arrays, and stdClass objects) into a string. The resulting string is a convenient way to store or transport the original object. A decoder performs the opposite function; it will take that encoded string and transform it into an array that’s ready to use. json_encode and json_decode are good examples of a commonly used encoder and decoder pair. XML is another example of a format to encode to. Note that for an object to be correctly encoded, it needs to be normalized first. Consider the following example, where we encode and decode an object without any normalization or denormalization.

class MyClass {}

$obj = new MyClass();
var_dump($obj);
// Outputs: object(MyClass) (0) {}

var_dump(json_decode(json_encode($obj)));
// Outputs: object(stdClass) (0) {}

You can see in the code above that composing the two inverse operations does not give back the original object of type MyClass. This is because the encoding operation loses information when the input data is not a simple set of scalars, arrays, and stdClass objects. Once that information is lost, the decoder cannot get it back.


One of the reasons why we need normalizers and denormalizers is to make sure that data is correctly simplified before being turned into a string, and upcast back to a typed object after being parsed from a string. Another reason is that different (de)normalizers allow us to work with different formats of the data. In the REST subsystem we have different normalizers to transform a Node object into the JSON, HAL, or JSON API formats. Those are JSON objects with different shapes, but they contain the same information. We also have different denormalizers that will take a simplified JSON, HAL, or JSON API payload and turn it into a Node object.
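For instance, here is a minimal sketch of normalizing the same node into two of those formats, assuming the serialization and hal modules are enabled (the node ID is just a placeholder):

use Drupal\node\Entity\Node;

// Load a node; ID 1 is a placeholder.
$node = Node::load(1);

// The 'serializer' service is Symfony's Serializer, assembled by Drupal
// from every service tagged as a normalizer or encoder.
$serializer = \Drupal::service('serializer');

// Normalize and then encode as plain JSON.
$json = $serializer->serialize($node, 'json');

// Normalize and then encode as HAL-flavored JSON
// (format name 'hal_json', provided by the hal module).
$hal = $serializer->serialize($node, 'hal_json');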

(De)Normalization in Drupal

The normalization of content entities is a very convenient way to express the content in a particular format and shape. So formatted, the data can be exported to other systems, stored as a text-based document, or served via an HTTP request. The denormalization of content entities is a great way to import content into your Drupal site. Normalization and denormalization can also be combined to transform a document from one format to another. Imagine that we want to transform a HAL document into a JSON API document. To do so, you need to denormalize the HAL input into a Node object, and then normalize it into the desired JSON API document.
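A sketch of that round trip, assuming $hal_document already holds a HAL string for a node; the last step uses plain JSON here, since the jsonapi module exposes its own format name for producing a true JSON API document:

use Drupal\node\Entity\Node;

$serializer = \Drupal::service('serializer');

// Decode and denormalize the HAL payload back into a Node entity.
$node = $serializer->deserialize($hal_document, Node::class, 'hal_json');

// Normalize and encode the entity again in a different format.
$json = $serializer->serialize($node, 'json');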

A good example of the normalization process is the Data Model module. In this case, instead of normalizing content entities such as nodes, the module normalizes the Typed Data definitions: the internal Drupal objects that define the schemas of the data for things like fields and properties. An integer field, for example, will contain a property (the value property) of type IntegerData. The Data Model module takes those object definitions and simplifies (normalizes) them. They can then be converted to a string following the JSON Schema format, for use in external tools such as beautiful documentation generators. Note how a different serialization could turn this typed data into a Markdown document instead of a JSON Schema string.

Adding a new (de)normalizer to the system

In order to add a new normalizer to the system you need to create a new tagged service in custom_module.services.yml.

serializer.custom_module.my_class_normalizer:
  class: Drupal\custom_module\Normalizer\MyClassNormalizer
  tags:
    - { name: normalizer, priority: 25 }

The class for this service should implement the normalization interface in the Symfony component, Symfony\Component\Serializer\Normalizer\NormalizerInterface. This normalizer service is in charge of declaring which types of objects it knows how to normalize and denormalize (that would be MyClass in our previous example). This way, the serialization module uses it when an object of type MyClass needs to be (de)normalized. Since multiple modules may provide a service that supports normalizing MyClass objects, the serialization module uses the priority key in the service definition to resolve which normalizer to use.
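A minimal sketch of such a class follows; the id() and label() methods and the array keys are illustrative assumptions about MyClass, not a prescribed shape:

namespace Drupal\custom_module\Normalizer;

use Drupal\custom_module\MyClass;
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

/**
 * Normalizes MyClass objects into arrays of scalars.
 */
class MyClassNormalizer implements NormalizerInterface {

  /**
   * Declares which data this normalizer knows how to handle.
   */
  public function supportsNormalization($data, $format = NULL) {
    return $data instanceof MyClass;
  }

  /**
   * Reduces the object to scalars and arrays so any encoder can
   * turn it into a string. The keys here are illustrative.
   */
  public function normalize($object, $format = NULL, array $context = []) {
    return [
      'id' => $object->id(),
      'label' => $object->label(),
    ];
  }

}

Supporting denormalization as well would mean also implementing Symfony\Component\Serializer\Normalizer\DenormalizerInterface in the same class.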

As you would expect, in Drupal you can alter and replace existing normalizers and denormalizers so they provide the output you need. This is very useful when you are trying to alter the output of the JSON API, JSON or HAL web services.
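One way to do that is a service provider class that swaps the class behind an existing normalizer service. In this sketch, 'serializer.normalizer.entity' is a stand-in service ID; check the defining module's services.yml for the real ID of the normalizer you want to replace:

namespace Drupal\custom_module;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceProviderBase;

/**
 * Swaps the class behind an existing normalizer service.
 *
 * Drupal discovers this class automatically by its name: the
 * camel-cased module name followed by "ServiceProvider".
 */
class CustomModuleServiceProvider extends ServiceProviderBase {

  public function alter(ContainerBuilder $container) {
    // 'serializer.normalizer.entity' is a placeholder service ID.
    if ($container->hasDefinition('serializer.normalizer.entity')) {
      $container->getDefinition('serializer.normalizer.entity')
        ->setClass('Drupal\custom_module\Normalizer\MyEntityNormalizer');
    }
  }

}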

In a later article, I will delve deeper into how to create a normalizer and a denormalizer from scratch by building an example module that (de)normalizes nodes.

Conclusion

The serialization component in Symfony allows you to deal with the shape of the data. It is of the utmost importance when you have to use Drupal data in an external system that requires the data to be expressed in a certain way. With this component, you can also perform the reverse process and create objects in Drupal that come from a text representation.

In that follow-up article, I will give an introduction to actually working with (de)normalizers in Drupal.

Categories: FLOSS Project Planets

FSF Blogs: Friday Free Software Directory IRC meetup: February 17th starting at 12 p.m. EST/17:00 UTC

GNU Planet! - Wed, 2017-02-15 10:50

Participate in supporting the FSD by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the FSD contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all carefully checked by FSF staff and trained volunteers.

While the FSD has been a great resource to the world for over a decade, and continues to be, it has the potential to be of even greater value. But it needs your help!

This week we will continue to hammer away at new entries awaiting approval. We've returned to this theme because one week was not enough to verify all of the new packages that are awaiting approval. Working on these entries helps ensure that the FSD keeps pace with the ever-expanding world of free software and remains a valuable tool for free software users and developers.

If you are eager to help but can't wait for Friday, or are simply unable to make it onto IRC, our participation guide will give you all the information you need to start helping the FSD today! There are also weekly FSD Meetings pages that everyone is welcome to contribute to before, during, and after each meeting.

Categories: FLOSS Project Planets