FLOSS Project Planets

Roberto Alsina: Episodio 10: Una Línea

Planet Python - Fri, 2019-10-11 14:00

One line. You're looking at code and you come across a line that... what on earth is that line? What do you do with it? Where do you even start?

Regular expressions, some not-very-convincing attempts at explanations, and more!

Categories: FLOSS Project Planets

Drupal.org blog: What's new on Drupal.org? - September 2019

Planet Drupal - Fri, 2019-10-11 13:09

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

Note from the author: "I'm back in the hot seat! As many of you know, I took on the role of interim executive director in September 2018 while the Drupal Association underwent an executive search. This summer we found a fantastic new leader in Heather Rocker, and now that she's had a few months to settle in, I'm able to return to my regular duties, like bringing you these updates. Thanks for your patience!"

- Tim Lehnen(hestenet)

September was a flurry of activity here at the Drupal Association at large. Coming off a season of travel to a number of Drupal events, we headed straight into our semi-annual staff off-site here in Portland, and followed that up by attending Google's second-ever CMS Leadership summit.

Despite the whirlwind of events taking place in September, we've also landed some major milestones on our roadmap, and are hard at work getting some exciting things together to talk about with you all at DrupalCon Amsterdam at the end of October. As an added bonus, this month's report includes a short retrospective about the impact of the GitLab migration on our maintenance work. 

Project News Composer Initiative work committed for release in Drupal 8.8.0

 

A major community initiative for Drupal 8.8.0 has been the push to make Drupal's internal scaffolding and filetree consistent, whether you start using Drupal by using the .zip download, or by using Composer from the get-go. Starting with Drupal 8.8.0 - no matter how you start your journey with Drupal, you'll be ready to use Composer's advanced dependency management when you need it.

Drupal Association engineering team member Ryan Aslett(mixologic) has been the initiative lead for this effort for more than a year. We're thrilled that his work and the work of many volunteers has been committed for release in Drupal 8.8.0!

We want to thank the following contributors for participating in this initiative in collaboration with us: 

The work is not over! There are still a number of clean ups and refinements being worked on in the Drupal core queue, and the Drupal Association team is working hard in October to ensure that Drupal.org itself will be ready to deliver these Composer-ready packages of Drupal 8.8.0 on release. 

Reminder: Drupal 8.8.0 is coming soon! 

Speaking of Drupal 8.8.0 - it enters the alpha phase during the week of October 14th, in preparation for release in December of this year.

Drupal 8.8.0 is the last minor release of Drupal before the simultaneous release of Drupal 8.9.0 and 9.0.0 next year. You can find more information about the Drupal release cycle here.

If you want to help ensure a smooth release, we invite you to join the Drupal Minor Release beta testing program.

Drupal.org Update Preparing our infrastructure for Automatic Updates

In September we spent a good amount of time outlining the architectural requirements that will need to be met in order to support delivering the update packages that are part of the Automatic Updates initiative.

We are only in the first phase of this initiative, which focuses on: 1) Informing site owners of upcoming critical releases, 2) Providing readiness checks that site owners can use to validate they are ready to apply an update, and 3) offering in-place automatic updates for a small subset of use-cases (critical security releases).

As this initiative progresses, and begins to cover more and more use cases, it should greatly reduce TCO for site owners, and friction for new adopters. However, to make that forward progress we are seeking sponsors for the second phase of work.

Readying our secure signing infrastructure

With the help of a number of community contributors (see below), a new architecture for a highly secure signing infrastructure has been laid out. As we roll into Q4 we'll get ready to stand this new infrastructure up and begin securing the first automatic updates packages.

Going into early October, a number of contributors came together at BadCamp to help advance this effort further. Without the collaboration between community members and Drupal Association staff, these initiatives would not be possible.

We'd like to thank the following contributors to the Automatic Updates/Secure Signing Infrastructure initiative: 

Supporting Drupal 9 readiness testing

In conjunction with the Drupal core team, the DA engineering team has been supporting the work to ensure that contributed projects are ready for the release of Drupal 9.

Early testing has shown that over 54% of projects compatible with Drupal 8 are *already* Drupal 9 ready, and we'll be continuing to work with the core team to get out the word about how to update the modules that are not yet compatible.

Infrastructure Update A brief retrospective on the GitLab migration

Drupal.org's partnership with GitLab to provide the tooling for Drupal and the ~40,000 contributed projects hosted on Drupal.org has been a significant step forward for our community. We're no longer relying on our own home-brewed git infrastructure for the project, and we're gradually rolling out more powerful collaboration tools to move the project forward.

But what has that meant in terms of maintenance work for the Drupal Association engineering team?

There was some hope as we were evaluating tooling providers that making a switch would almost entirely eliminate the maintenance and support burden. While that was a hopeful outlook, the reality is that maintaining 'off-the-shelf' software can be at least as much work as maintaining mature existing tools.

GitLab in particular is still iterating at a tremendously rapid pace, releasing updates and new features every month. However, that speed of development has also meant frequent maintenance and security releases, meaning the DA team has had to update our GitLab installation almost once a week in some months.

Does that mean we're unhappy with the change? Absolutely not! We're still thrilled to be working with the GitLab team, and are excited about the new capabilities this partnership unlocks for the Drupal community (with more coming soon!).

However, it is a good lesson to anyone running a service for a large community that there's no free lunch - and a great reminder of why the support of Drupal Association members and supporting partners is so essential to our work.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who make it possible for us to work on these projects. In particular, we want to thank:

  • Tag1 - Renewing Signature Supporting Partner
  • Gitlab - *NEW* Premium Technology Supporter
  • Four Kitchens - Renewing Premium Supporting Partner
  • Phase2 - Renewing Premium Supporting Partner
  • WebEnertia - Renewing Premium Supporting Partner
  • Thunder - Renewing Premium Supporting Partner
  • Palantir - Renewing Premium Supporting Partner
  • Specbee - Renewing Premium Supporting Partner 
  • Pantheon - Renewing Premium Hosting Supporter
  • Cyber-Duck - *NEW* Classic Supporting Partner
  • Websolutions Agency - *NEW* Classic Supporting Partner
  • Unic - *NEW* Classic Supporting Partner
  • Kalamuna - Renewing Classic Supporting Partner 
  • ThinkShout - Renewing Classic Supporting Partner 
  • Amazee - Renewing Classic Supporting Partner 
  • Access - Renewing Classic Supporting Partner 
  • Studio Present - Renewing Classic Supporting Partner 
  • undpaul - Renewing Classic Supporting Partner
  • Mediacurrent - Renewing Classic Supporting Partner 
  • Appnovation - Renewing Classic Supporting Partner 
  • Position2 - Renewing Classic Supporting Partner 
  • Kanopi Studios - Renewing Classic Supporting Partner 
  • Deeson - Renewing Classic Supporting Partner 
  • GeekHive - Renewing Classic Supporting Partner 
  • OpenSense Labs - Renewing Classic Supporting Partner 
  • Synetic - Renewing Classic Supporting Partner 
  • Axelerant - Renewing Classic Supporting Partner 
  • Centretek - Renewing Classic Supporting Partner 
  • PreviousNext - Renewing Classic Supporting Partner 
  • UniMity Solutions - Renewing Classic Supporting Partner 
  • Code Koalas - Renewing Classic Supporting Partner 
  • Vardot - Renewing Classic Supporting Partner 
  • Berger Schmidt - Renewing Classic Supporting Partner 
  • Authorize.Net - Renewing Classic Technology Supporter
  • JetBrains - Renewing Classic Technology Supporter
  • GlowHost - Renewing Classic Hosting Supporter
  • Sevaa - Renewing Classic Hosting Supporter
  • Green Geeks - Renewing Classic Hosting Supporter

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

Categories: FLOSS Project Planets

TEN7 Blog's Drupal Posts: We Give Back: Drupal Module Framework for Migrations

Planet Drupal - Fri, 2019-10-11 10:52

We recently completed a special data integration project for the University of Minnesota, Office of the Executive Vice President and Provost, Faculty and Academic Affairs. Faculty Affairs, as they are referred to, uses a product called Activity Insight from Digital Measures (referred to internally as “Works”) to capture and organize faculty information.

Categories: FLOSS Project Planets

PyCharm: Webinar Preview: “Debugging During Testing” tutorial steps for React+TS+TDD

Planet Python - Fri, 2019-10-11 09:24

As a reminder… next Wednesday (Oct 16) I’m giving a webinar on React+TypeScript+TDD in PyCharm. I’m doing some blog posts about material that will be covered.

See the first blog post for some background on this webinar and its topic.

Spotlight: Debugging During Testing

I often speak about “visual debugging” and “visual testing”, meaning, how IDEs can help put these intermediate concepts within reach using a visual UI.

For testing, sometimes our code has problems that require investigation with a debugger. For React, that usually means a trip to the browser to set a breakpoint and use the Chrome developer tools. In Debugging During Testing With NodeJS we show how the IDE’s debugger, combined with TDD, can make this investigation far more productive. In the next step we show how to do so using Chrome as the execution environment.

Here’s the first video:

Then the second step, showing debugging in Chrome:

TDD is a process of exploration, as well as a productive way to write code while staying in the “flow”. The debugger helps with both of those and is an essential tool during test writing.

Categories: FLOSS Project Planets

Stack Abuse: Autoencoders for Image Reconstruction in Python and Keras

Planet Python - Fri, 2019-10-11 08:10
Introduction

Nowadays, we have huge amounts of data in almost every application we use - listening to music on Spotify, browsing friends' images on Instagram, or maybe watching a new trailer on YouTube. There is always data being transmitted from the servers to you.

This wouldn't be a problem for a single user. But imagine handling thousands, if not millions, of requests with large data at the same time. These streams of data have to be reduced somehow in order for us to be physically able to provide them to users - this is where data compression kicks in.

There are lots of compression techniques, and they vary in their usage and compatibility. For example, some compression techniques only work on audio files, like the famous MPEG-2 Audio Layer III (MP3) codec.

There are two main types of compression (a tiny code sketch contrasting them follows this list):

  • Lossless: Data integrity and accuracy are preserved, even if we don't "shave off" much
  • Lossy: Data integrity and accuracy aren't as important as how fast we can serve the data - imagine a real-time video transfer, where it's more important to be "live" than to have high-quality video
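
To make the distinction concrete, here is a tiny illustrative sketch (not from the original article) that uses Python's standard zlib module for lossless compression and crude subsampling as a stand-in for a lossy scheme:

import zlib

data = b"la" * 5000                      # 10,000 bytes of a very repetitive "signal"

# Lossless: every original byte can be recovered exactly
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data

# A crude "lossy" stand-in: keep only every other byte - smaller, but the dropped bytes are gone for good
lossy = data[::2]

print(len(data), len(compressed), len(lossy))   # e.g. 10000, ~30, 5000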

For example, using Autoencoders, we're able to decompose this image and represent it as the 32-vector code below. Using it, we can reconstruct the image. Of course, this is an example of lossy compression, as we've lost quite a bit of info.

Though, we can use the exact same technique to do this much more accurately, by allocating more space for the representation:

What are Autoencoders?

An autoencoder is, by definition, a technique to encode something automatically. By using a neural network, the autoencoder is able to learn how to decompose data (in our case, images) into fairly small bits of data, and then, using that representation, reconstruct the original data as closely as it can.

There are two key components in this task:

  • Encoder: Learns how to compress the original input into a small encoding
  • Decoder: Learns how to restore the original data from that encoding generated by the Encoder

These two are trained together in symbiosis to obtain the most efficient representation of the data that we can reconstruct the original data from, without losing too much of it.

Credit: ResearchGate

Encoder

The Encoder is tasked with finding the smallest possible representation of data that it can store - extracting the most prominent features of the original data and representing it in a way the decoder can understand.

Think of it as if you were trying to memorize something, like a large number - you try to find a pattern in it that you can memorize and restore the whole sequence from, since it's easier to remember a short pattern than the whole number.

Encoders in their simplest form are simple Artificial Neural Networks (ANNs). Though, there are certain encoders that utilize Convolutional Neural Networks (CNNs), which is a very specific type of ANN.

The encoder takes the input data and generates an encoded version of it - the compressed data. We can then use that compressed data to send it to the user, where it will be decoded and reconstructed. Let's take a look at the encoding for a LFW dataset example:

The encoding here doesn't make much sense for us, but it's plenty for the decoder. Now, it's valid to raise the question:

"But how did the encoder learn to compress images like this?"

This is where the symbiosis during training comes into play.

Decoder

The Decoder works in a similar way to the Encoder, but in reverse. It learns to read these compressed code representations, instead of generating them, and to generate images based on that info. It aims to minimize the reconstruction loss.

The output is evaluated by comparing the reconstructed image to the original one, using Mean Squared Error (MSE) - the more similar it is to the original, the smaller the error.

At this point, the error propagates backwards and all the parameters are updated, from the decoder back to the encoder. Based on the differences between the input and output images, both the encoder and the decoder are evaluated at their jobs and update their parameters to become better.
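
As a quick illustration of that evaluation step (a minimal sketch, not the article's code), MSE is just the average squared pixel-wise difference between the two images:

import numpy as np

def mse(original, reconstruction):
    # Mean Squared Error: average of the squared pixel-wise differences
    return np.mean((original - reconstruction) ** 2)

# Hypothetical 32x32 RGB "images" with values in [0, 1]
original = np.random.rand(32, 32, 3)
reconstruction = original + np.random.normal(scale=0.05, size=original.shape)

print(mse(original, original))        # 0.0 - a perfect reconstruction
print(mse(original, reconstruction))  # small, but non-zero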

Building an Autoencoder

Keras is a Python framework that makes building neural networks simpler. It allows us to stack layers of different types to create a deep neural network - which we will do to build an autoencoder.

First, let's install Keras using pip:

$ pip install keras

Preprocessing Data

Again, we'll be using the LFW dataset. As usual, with projects like these, we'll preprocess the data to make it easier for our autoencoder to do its job.

For this, we'll first define a couple of paths which lead to the dataset we're using:

# http://www.cs.columbia.edu/CAVE/databases/pubfig/download/lfw_attributes.txt
ATTRS_NAME = "lfw_attributes.txt"

# http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz
IMAGES_NAME = "lfw-deepfunneled.tgz"

# http://vis-www.cs.umass.edu/lfw/lfw.tgz
RAW_IMAGES_NAME = "lfw.tgz"

Then, we'll employ two functions - one to convert the raw matrix into an image and change the color system to RGB:

import cv2
import numpy as np

def decode_image_from_raw_bytes(raw_bytes):
    img = cv2.imdecode(np.asarray(bytearray(raw_bytes), dtype=np.uint8), 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img

And the other one to actually load the dataset and adapt it to our needs:

import os
import tarfile

import pandas as pd
import tqdm

def load_lfw_dataset(
        use_raw=False,
        dx=80, dy=80,
        dimx=45, dimy=45):

    # Read attrs
    df_attrs = pd.read_csv(ATTRS_NAME, sep='\t', skiprows=1)
    df_attrs = pd.DataFrame(df_attrs.iloc[:, :-1].values, columns=df_attrs.columns[1:])
    imgs_with_attrs = set(map(tuple, df_attrs[["person", "imagenum"]].values))

    # Read photos
    all_photos = []
    photo_ids = []

    # tqdm is used to show a progress bar while reading the data in a notebook;
    # change tqdm_notebook to tqdm to use it outside a notebook
    with tarfile.open(RAW_IMAGES_NAME if use_raw else IMAGES_NAME) as f:
        for m in tqdm.tqdm_notebook(f.getmembers()):
            # Only process image files from the compressed data
            if m.isfile() and m.name.endswith(".jpg"):
                # Prepare image
                img = decode_image_from_raw_bytes(f.extractfile(m).read())

                # Crop only faces and resize it
                img = img[dy:-dy, dx:-dx]
                img = cv2.resize(img, (dimx, dimy))

                # Parse person and append it to the collected data
                fname = os.path.split(m.name)[-1]
                fname_splitted = fname[:-4].replace('_', ' ').split()
                person_id = ' '.join(fname_splitted[:-1])
                photo_number = int(fname_splitted[-1])
                if (person_id, photo_number) in imgs_with_attrs:
                    all_photos.append(img)
                    photo_ids.append({'person': person_id, 'imagenum': photo_number})

    photo_ids = pd.DataFrame(photo_ids)
    all_photos = np.stack(all_photos).astype('uint8')

    # Preserve photo_ids order!
    all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)

    return all_photos, all_attrs

Implementing the Autoencoder

import numpy as np

X, attr = load_lfw_dataset(use_raw=True, dimx=32, dimy=32)

Our data is in the X array, where each image is a 3D matrix, which is the default representation for RGB images: three matrices - red, green, and blue - whose combination produces the image's colors.

These images will have large values for each pixel, ranging from 0 to 255. Generally in machine learning we tend to make values small, and centered around 0, as this helps our model train faster and get better results, so let's normalize our images:

X = X.astype('float32') / 255.0 - 0.5

If we now check the min and max of the X array, they will be -0.5 and 0.5, which you can verify:

print(X.max(), X.min())

0.5 -0.5

To be able to see the images, let's create a show_image function. It will add 0.5 back to the images, since pixel values can't be negative:

import matplotlib.pyplot as plt

def show_image(x):
    plt.imshow(np.clip(x + 0.5, 0, 1))

Now let's take a quick look at our data:

show_image(X[6])

Great, now let's split our data into a training and test set:

from sklearn.model_selection import train_test_split

X_train, X_test = train_test_split(X, test_size=0.1, random_state=42)

The sklearn train_test_split() function splits the data given a test ratio; the rest is, of course, the training set. The random_state, which you are going to see a lot in machine learning, is used to reproduce the same split no matter how many times you run the code.

Now time for the model:

from keras.layers import Dense, Flatten, Reshape, Input, InputLayer
from keras.models import Sequential, Model

def build_autoencoder(img_shape, code_size):
    # The encoder
    encoder = Sequential()
    encoder.add(InputLayer(img_shape))
    encoder.add(Flatten())
    encoder.add(Dense(code_size))

    # The decoder
    decoder = Sequential()
    decoder.add(InputLayer((code_size,)))
    decoder.add(Dense(np.prod(img_shape)))  # np.prod(img_shape) is the same as 32*32*3, it's more generic than saying 3072
    decoder.add(Reshape(img_shape))

    return encoder, decoder

This function takes an img_shape (the image dimensions) and code_size (the size of the output representation) as parameters. The image shape, in our case, will be (32, 32, 3), where the two 32s are the width and height and 3 is the number of color channels. In other words, each image has 3072 dimensions.

Logically, the smaller the code_size, the more the image is compressed, but the fewer features are preserved, and the reconstructed image will differ that much more from the original.
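
As a quick sanity check on those numbers (a small sketch, not part of the original code):

import numpy as np

img_shape = (32, 32, 3)
original_size = np.prod(img_shape)   # 3072 values per image
code_size = 32

print(original_size)                 # 3072
print(original_size / code_size)     # 96.0 -> roughly a 96x smaller representation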

A Keras sequential model is basically used to sequentially add layers and deepen our network. Each layer feeds into the next one, and here, we're simply starting off with the InputLayer (a placeholder for the input) with the size of the input vector - image_shape.

The Flatten layer's job is to flatten the (32,32,3) matrix into a 1D array (3072) since the network architecture doesn't accept 3D matrices.

The last layer in the encoder is the Dense layer, which is the actual neural network here. It tries to find the optimal parameters that achieve the best output - in our case it's the encoding, and we will set the output size of it (also the number of neurons in it) to the code_size.

The decoder is also a sequential model. It accepts the input (the encoding) and tries to reconstruct the image as a flat row of 3072 values via its Dense layer. The final Reshape layer then turns that row back into a 32x32x3 image.
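
If you want to convince yourself of the shapes flowing through the two halves, here's a small check (assuming the build_autoencoder helper above):

encoder, decoder = build_autoencoder((32, 32, 3), code_size=32)

print(encoder.output_shape)   # (None, 32)        -> the compressed code
print(decoder.output_shape)   # (None, 32, 32, 3) -> back to image shape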

Now let's connect them together and start our model:

# Same as (32,32,3), we neglect the number of instances from shape
IMG_SHAPE = X.shape[1:]
encoder, decoder = build_autoencoder(IMG_SHAPE, 32)

inp = Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)

autoencoder = Model(inp, reconstruction)
autoencoder.compile(optimizer='adamax', loss='mse')

print(autoencoder.summary())

This code is pretty straightforward - our code variable is the output of the encoder, which we put into the decoder and generate the reconstruction variable.

Afterwards, we link them both by creating a Model with the inp and reconstruction parameters and compiling it with the adamax optimizer and mse loss function.

Compiling the model here means defining its objective and how to reach it. The objective in our context is to minimize the mse, and we reach that by using an optimizer - essentially an algorithm that iteratively adjusts the weights, following the gradient of the loss, to drive it toward a minimum.
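
To give a feel for what "minimizing the MSE with an optimizer" means, here's a deliberately tiny, hand-rolled gradient descent sketch on a one-parameter model (illustrative only; Keras does the equivalent across all of the model's parameters):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                    # the "true" relationship we want the model to learn

w, lr = 0.0, 0.1               # initial weight and learning rate
for _ in range(50):
    grad = np.mean(-2 * x * (y - w * x))   # d/dw of mean((y - w*x)^2)
    w -= lr * grad                         # step against the gradient to reduce the MSE

print(round(w, 3))             # approaches 2.0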

At this point, we can summarize the results:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_6 (InputLayer)         (None, 32, 32, 3)         0
_________________________________________________________________
sequential_3 (Sequential)    (None, 32)                98336
_________________________________________________________________
sequential_4 (Sequential)    (None, 32, 32, 3)         101376
=================================================================
Total params: 199,712
Trainable params: 199,712
Non-trainable params: 0
_________________________________________________________________

Here we can see the input shape is (32, 32, 3). The None refers to the instance index: when we feed data to the model it will have a shape of (m, 32, 32, 3), where m is the number of instances, so Keras keeps that dimension as None.

The hidden layer is 32, which is indeed the encoding size we chose, and lastly the decoder output as you see is (32,32,3).

Now, let's train the model:

history = autoencoder.fit(x=X_train, y=X_train, epochs=20, validation_data=[X_test, X_test])

In our case, we'll be comparing the reconstructed images to the original ones, so both x and y are equal to X_train. Ideally, the input is equal to the output.

The epochs variable defines how many times we want the training data to be passed through the model and the validation_data is the validation set we use to evaluate the model after training:

Train on 11828 samples, validate on 1315 samples
Epoch 1/20
11828/11828 [==============================] - 3s 272us/step - loss: 0.0128 - val_loss: 0.0087
Epoch 2/20
11828/11828 [==============================] - 3s 227us/step - loss: 0.0078 - val_loss: 0.0071
...
Epoch 20/20
11828/11828 [==============================] - 3s 237us/step - loss: 0.0067 - val_loss: 0.0066

We can visualize the loss over the epochs to get an idea of how many epochs are really needed.

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

We can see that after the third epoch, there's no significant progress in loss. Visualizing like this can help you get a better idea of how many epochs is really enough to train your model. In this case, there's simply no need to train it for 20 epochs, and most of the training is redundant.

Training for too long can also lead to overfitting, which makes the model perform poorly on new data outside the training and testing datasets.
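
One common way to avoid those redundant epochs (not used in the article, but a standard Keras feature in reasonably recent versions) is an EarlyStopping callback:

from keras.callbacks import EarlyStopping

# Stop once validation loss hasn't improved for 3 consecutive epochs,
# and roll back to the best weights seen so far
stop_early = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

history = autoencoder.fit(x=X_train, y=X_train, epochs=20,
                          validation_data=[X_test, X_test],
                          callbacks=[stop_early])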

Now, the most anticipated part - let's visualize the results:

def visualize(img, encoder, decoder):
    """Draws original, encoded and decoded images"""
    # img[None] will have shape of (1, 32, 32, 3) which is the same as the model input
    code = encoder.predict(img[None])[0]
    reco = decoder.predict(code[None])[0]

    plt.subplot(1, 3, 1)
    plt.title("Original")
    show_image(img)

    plt.subplot(1, 3, 2)
    plt.title("Code")
    plt.imshow(code.reshape([code.shape[-1] // 2, -1]))

    plt.subplot(1, 3, 3)
    plt.title("Reconstructed")
    show_image(reco)
    plt.show()

for i in range(5):
    img = X_test[i]
    visualize(img, encoder, decoder)





You can see that the results are not really good. However, if we take into consideration that the whole image is encoded in the extremely small 32-value vector seen in the middle, this isn't bad at all. Through the compression from 3072 dimensions to just 32 we lose a lot of information.

Now, let's increase the code_size to 1000:
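
The article shows the resulting reconstructions below; for reference, a sketch of rebuilding and retraining the model with the larger code (assuming the same helpers and data as above) would look like this:

encoder_big, decoder_big = build_autoencoder(IMG_SHAPE, code_size=1000)

inp = Input(IMG_SHAPE)
code = encoder_big(inp)
reconstruction = decoder_big(code)

autoencoder_big = Model(inp, reconstruction)
autoencoder_big.compile(optimizer='adamax', loss='mse')
autoencoder_big.fit(x=X_train, y=X_train, epochs=20,
                    validation_data=[X_test, X_test])

for i in range(5):
    visualize(X_test[i], encoder_big, decoder_big)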





See the difference? As you give the model more space to work with, it saves more important information about the image.

Note: The encoding is not two-dimensional, as represented above. This is just for illustration purposes. In reality, it's a one dimensional array of 1000 dimensions.

What we just did is called Principal Component Analysis (PCA), which is a dimensionality reduction technique. We can use it to reduce the feature set size by generating new features that are smaller in size, but still capture the important information.

Principal component analysis is a very popular usage of autoencoders.

Image Denoising

Another popular usage of autoencoders is denoising. Let's add some random noise to our pictures:

def apply_gaussian_noise(X, sigma=0.1):
    noise = np.random.normal(loc=0.0, scale=sigma, size=X.shape)
    return X + noise

Here we add random noise drawn from a normal distribution with a standard deviation (scale) of sigma, which defaults to 0.1.

For reference, this is what noise looks like with different sigma values:

plt.subplot(1, 4, 1)
show_image(X_train[0])

plt.subplot(1, 4, 2)
show_image(apply_gaussian_noise(X_train[:1], sigma=0.01)[0])

plt.subplot(1, 4, 3)
show_image(apply_gaussian_noise(X_train[:1], sigma=0.1)[0])

plt.subplot(1, 4, 4)
show_image(apply_gaussian_noise(X_train[:1], sigma=0.5)[0])

As we can see, with sigma increased to 0.5 the image is barely recognizable. We will try to regenerate the original images from noisy ones with a sigma of 0.1.

The model we'll be generating for this is the same as the one from before, though we'll train it differently. This time around, we'll train it with the original and corresponding noisy images:

code_size = 100  # We can use bigger code size for better quality

encoder, decoder = build_autoencoder(IMG_SHAPE, code_size=code_size)

inp = Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)

autoencoder = Model(inp, reconstruction)
autoencoder.compile('adamax', 'mse')

for i in range(25):
    print("Epoch %i/25, Generating corrupted samples..." % (i + 1))
    X_train_noise = apply_gaussian_noise(X_train)
    X_test_noise = apply_gaussian_noise(X_test)

    # We continue to train our model with new noise-augmented data
    autoencoder.fit(x=X_train_noise, y=X_train, epochs=1,
                    validation_data=[X_test_noise, X_test])

Now let's see the model results:

X_test_noise = apply_gaussian_noise(X_test)

for i in range(5):
    img = X_test_noise[i]
    visualize(img, encoder, decoder)





Autoencoder Applications

There are many more usages for autoencoders, besides the ones we've explored so far.

Autoencoders can be used in applications like deepfakes, where you combine an encoder and a decoder from different models.

For example, let's say we have two autoencoders, one for Person X and one for Person Y. There's nothing stopping us from using the encoder of Person X and the decoder of Person Y to generate images of Person Y with the prominent features of Person X:

Credit: AlanZucconi
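
A hedged sketch of what that encoder/decoder swap could look like with Keras, where encoder_x/decoder_x and encoder_y/decoder_y are hypothetical, separately trained autoencoder halves for the two people:

# encoder_x / decoder_x: trained on images of Person X (hypothetical names)
# encoder_y / decoder_y: trained on images of Person Y (hypothetical names)
inp = Input(IMG_SHAPE)
code_x = encoder_x(inp)          # extract Person X's prominent features as a code
swapped = decoder_y(code_x)      # render that code with Person Y's decoder

face_swap = Model(inp, swapped)
fake = face_swap.predict(image_of_person_x[None])[0]   # image_of_person_x is a hypothetical input image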

Autoencoders can also be used for image segmentation - for example in autonomous vehicles, where you need to segment different items for the vehicle to make a decision:

Credit: PapersWithCode

Conclusion

Autoencoders can be used for Principal Component Analysis, which is a dimensionality reduction technique, for image denoising, and much more.

You can try it yourself with a different dataset, for example the MNIST dataset, and see what results you get.

Categories: FLOSS Project Planets

Gary Benson: Python hacking

GNU Planet! - Fri, 2019-10-11 08:06

Python‘s had this handy logging module since July 2003. A lot of things use it, so if you’re trying to understand or debug some Python code then a handy snippet to insert somewhere is:

import logging
logging.basicConfig(level=1)

Those two lines cause all loggers to log everything to the console. Check out the logging.basicConfig docs to see what else you could do.
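
If logging everything from every library is too noisy, a slightly more targeted variant (still plain standard-library logging; the "requests" name is just an example package) looks like this:

import logging

logging.basicConfig(
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
    level=logging.WARNING,                             # keep most libraries quiet
)
logging.getLogger("requests").setLevel(logging.DEBUG)  # but show everything from this one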

Categories: FLOSS Project Planets

Kdenlive 19.08.2 is out

Planet KDE - Fri, 2019-10-11 04:00

Kdenlive 19.08.2 is out with many goodies ranging from usability and user interface improvements all the way to fixes to speed effect bugs and even a couple of crashes.

Check it out:

  • Fix crash on composition resize. Commit.
  • Update MSYS2 build script. Commit.
  • Fix Windows audio screen grab (#344). Commit.
  • Remove local reference to current project. Commit.
  • Disable multitrack view on render. Commit.
  • Fix clip duration incorrectly reset on profile change. Fixes #360. Commit.
  • Fix compile warnings. Commit.
  • Make affine filter bg color configurable. Fixes #343. Commit.
  • Fix speed job in some locales. Fixes #346. Commit.
  • Fix some remaining effectstack layout issues. Commit.
  • Fix keyframes not deleted when clip start is resized/cut. Commit.
  • Fix track effects not working when a clip is added at end of track or if last clip is resized. Commit.
  • Add clickable field to copy automask keyframes. Fixes #23. Commit.
  • Show track effect stack when clicking on its name. Commit.
  • Fix crash trying to access clip properties when unavailable. Commit.
  • Fix effectstack layout spacing issue introduced in recent commit. Commit.
  • Fix proxy clips lost on opening project file with relative path. Commit.
  • Update AppData version. Commit.
  • Cleanup effectstack layout. Fixes !58 #294. Commit.
  • Fix mixed audio track sorting. Commit. See bug #411256
  • Another fix for speed effect. Commit.
  • Speed effect: fix negative speed incorrectly moving in/out and wrong thumbnails. Commit.
  • Fix incorrect stabilize description. Commit.
  • Cleanup stabilize presets and job cancelation. Commit.
  • Deprecate videostab and videostab2, only keep vidstab filter. Commit.
  • Fix cancel jobs not working. Commit.
  • Fix some incorrect i18n calls. Commit.
  • Don’t hardcode vidstab effect settings. Commit.
Categories: FLOSS Project Planets

Test and Code: 90: Dynamic Scope Fixtures in pytest 5.2 - Anthony Sottile

Planet Python - Fri, 2019-10-11 03:00

pytest 5.2 was just released, and with it, a cool fun feature called dynamic scope fixtures. Anthony Sottile (pronounced "so tilly") is one of the pytest core developers, so I thought it would be fun to have Anthony describe this new feature for us.

We also talk about parametrized testing, what fixture scope really is, and then what dynamic scope is.
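
To give a flavor of the feature ahead of the episode, here's a minimal sketch in the style of the pytest documentation (the --fresh-db option is a made-up example you would register yourself in conftest.py):

import pytest

def determine_scope(fixture_name, config):
    # Reuse one connection for the whole session unless --fresh-db is passed
    if config.getoption("--fresh-db", default=False):
        return "function"
    return "session"

@pytest.fixture(scope=determine_scope)
def db_connection():
    conn = object()   # stand-in for a real database connection
    yield conn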

Special Guest: Anthony Sottile.

Sponsored By:

  • Raygun: Detect, diagnose, and destroy Python errors that are affecting your customers. With smart Python error monitoring software from Raygun.com, you can be alerted to issues affecting your users the second they happen. (https://testandcode.com/raygun)

Support Test & Code - Python Testing & Development: https://www.patreon.com/testpodcast

Links:

  • pytest changelog: https://pytest.org/en/latest/changelog.html
  • pytest fixtures: https://docs.pytest.org/en/latest/fixture.html#scope-sharing-a-fixture-instance-across-tests-in-a-class-module-or-session
  • dynamic scope fixtures: https://docs.pytest.org/en/latest/fixture.html#dynamic-scope
  • episode 82: pytest - favorite features since 3.0: https://testandcode.com/82
  • the pytest book (Python Testing with pytest): https://amzn.to/2QnzvUv
Categories: FLOSS Project Planets

parted @ Savannah: parted-3.3 released [stable]

GNU Planet! - Thu, 2019-10-10 20:18

Parted 3.3 has been released.  This release includes many bug fixes and new features.

Here is Parted's home page:

    http://www.gnu.org/software/parted/

For a summary of all changes and contributors, see:
  https://git.savannah.gnu.org/cgit/parted.git/log/?h=v3.3

or run this command from a git-cloned parted directory:
  git shortlog v3.2..v3.3 (appended below)

Here are the compressed sources and a GPG detached signature[*]:
  http://ftp.gnu.org/gnu/parted/parted-3.3.tar.xz
  http://ftp.gnu.org/gnu/parted/parted-3.3.tar.xz.sig

Use a mirror for higher download bandwidth:
  http://ftpmirror.gnu.org/parted/parted-3.3.tar.xz
  http://ftpmirror.gnu.org/parted/parted-3.3.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact.  First, be sure to download both the .sig file and the corresponding tarball.  Then, run a command like this:

  gpg --verify parted-3.3.tar.xz.sig

If that command fails because you don't have the required public key, then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 117E8C168EFE3A7F

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.1
  Gettext 0.19.8.1
  Gnulib commit 6430babe47ece6953cf18ef07c1d8642c8588e89
  Gperf 3.1

NEWS

A considerable number of patches have been made since the last release, see the git log if you want all the gory details.

A huge thank you to everyone who has contributed to this release.

================================================================
Here is a log of the commits since parted 3.2

A. Wilcox (1):
      libparted: Fix endian bug in bsd.c

Alexander Todorov (3):
      tests: Fall back to C.UTF-8 if no en_US.utf8 available
      merge HACKING and README-hacking
      Fwd: [PATCH 2/2] add verbose test documentation

Amarnath Valluri (3):
      UI: Avoid memory leaks.
      libparted: Fix memory leaks
      libparted: Fix possible memory leaks

Arnout Vandecappelle (Essensium/Mind) (1):
      libparted/labels: link with libiconv if needed

Arvin Schnell (1):
      libparted: set swap flag on GPT partitions

Brian C. Lane (73):
      tests: Change minimum size to 256MiB
      tests: Add a test for device-mapper partition sizes
      libparted: device mapper uses 512b sectors
      Update manpage NAME so whatis will work
      doc: Fix url for LWN article
      tests: Make sure the extended partition length is correct (#1135493)
      libparted: BLKPG_RESIZE_PARTITION uses bytes, not sectors (#1135493)
      parted: Fix crash with name command and no disklabel (#1226067)
      libparted: Stop converting . in sys path to /
      libparted: Fix misspelling in hfs exception string
      libparted: Use read only when probing devices on linux (#1245144)
      tests: Use wait_for_dev_to_ functions
      Add libparted-fs-resize.pc
      docs: Add list of filesystems for fs-type (#1311596)
      parted: Display details of partition alignment failure (#726856)
      libparted: Remove fdasd geometry code from alloc_metadata (#1244833)
      libparted: Fix probing AIX disks on other arches
      tests: t3310-flags.sh skip pc98 when sector size != 512
      tests: Add udevadm settle to wait_for_ loop (#1260664)
      tests: Add wait to t9042 (#1257415)
      tests: Fix t1700 failing on a host with a 4k xfs filesystem (#1260664)
      doc: Cleanup mkpart manpage entry (#1183077)
      doc: Add information about quoting
      tests: Set optimal blocks to 64 for scsi_debug devices
      partprobe: Open the device once for probing
      tests: Stop timing t9040 (#1172675)
      tests: Update t0220 and t0280 for the swap flag.
      Increase timeout for rmmod scsi_debug and make it a framework failure
      tests/t1701-rescue-fs wait for the device to appear.
      libparted: Fix udev cookie leak in _dm_resize_partition
      libparted: Fix udev cookie leak in _dm_resize_partition
      atari.c: Drop xlocale.h (#1476934)
      Modify gpt-header-move and msdos-overlap to work with py2 or py3
      Fix the length of several strncpy calls
      parted.c: Always free peek_word
      parted.c: Make sure dev_name is freed
      t6100-mdraid-partitions: Use v0.90 metadata for the test
      Add udf to t1700-probe-fs and to the manpage
      docs: Update GNU License version in parted .text files
      parted: Remove PED_ASSERT from ped_partition_set_name
      Fix align-check help output
      README-release: Updating the release instructions
      configure.ac: Remove default -Werror flag
      Remove unnecessary if before free checks
      Remove trailing whitespace
      Fix syntax-check complaints about tests
      Update syntax-check NEWS hash to cover 3.2 release notes.
      Fix double semi-colons
      Change 'time stamp' to 'timestamp'
      atari.c: Align the AtariRawTable on a 16bit boundary
      dos.c: Fix cast alignment error in maybe_FAT
      Adjust the gcc warnings to recognize FALLTHROUGH
      dvh.c: Use memcpy instead of strncpy
      gpt.c: Align _GPTDiskData to 8 byte boundary
      gpt.c: Drop cast of efi_guid_t to unsigned char *
      sun.c: Aligned _SunRawLabel to 16bit boundary
      Add gcc malloc attribute to ped_alloc and ped_calloc
      bsd.c: Rewrite disklabel code to prevent gcc warnings
      po: Add argmatch.h
      Turn off c_prohibit_gnu_make_extensions
      dist-check.mk: Remove empty .deps directories
      doc: Create po directory if missing
      libparted: Fix bug in bsd.c alpha_bootblock_checksum
      maint: Update to latest gnulib
      maint: Update bootstrap script from latest gnulib
      maint: Bump library REVISION number for release
      maint: Update copyright statements to 2019
      maint: Move NEWS template to line 3
      version 3.2.153
      maint: post-release administrivia
      README-release: Add link to upload registration page
      NEWS: Note the fix for the s390 bug
      version 3.3

Colin Watson (2):
      parted: fix build error on s390
      build: Remove unused traces of dynamic loading

Curtis Gedak (1):
      lib-fs-resize: Fix recognition of FAT file system after resizing

David Cantrell (1):
      Use BLKSSZGET to get device sector size in _device_probe_geometry()

Felix Janda (2):
      libparted/arch/linux.c: Compile without ENABLE_DEVICE_MAPPER
      libparted/fs/xfs/platform_defs.h: Include <fcntl.h> for loff_t

Gareth Randall (1):
      docs: Improve partition description in parted.texi

Gustavo Zacarias (1):
      bug #17883: [PATCH] configure.ac: uclinux is also linux

Hans-Joachim Baader (1):
      Added support for Windows recovery partition (WINRE) on MBR

Heiko Becker (1):
      libparted: also link to UUID_LIBS

John Paul Adrian Glaubitz (2):
      libparted:tests: Move get_sector_size() to common.c
      libparted: Add support for atari partition tables

Laurent Vivier (1):
      libparted: Fix MacOS boot support

Max Staudt (1):
      libparted/fs/amiga/affs.c: Remove printf() to avoid confusion

Michael Small (2):
      Avoid sigsegv in case 2nd nilfs2 superblock magic accidently found.
      Tests case for sigsegv when false nilfs2 superblock detected.

Mike Fleetwood (13):
      lib-fs-resize: Prevent crash resizing FAT16 file systems
      tests: t3000-resize-fs.sh: Add FAT16 resizing test
      tests: t3000-resize-fs.sh: Add requirement on mkfs.vfat
      lib-fs-resize: Prevent crash resizing FAT with very deep directories
      tests: t3000-resize-fs.sh: Add very deep directory
      tests: t3310-flags.sh: Query libparted for all flags to be tested
      tests: t3310-flags.sh: Stop excluding certain flags from being tested
      tests: t3310-flags.sh: Add test for bsd table flags
      libparted: Fix to report success when setting lvm flag on bsd table
      libparted: Remove commented local variable from bsd_partition_set_flag()
      tests: t3310-flags.sh: Add test for mac table flags
      tests: t3310-flags.sh: Add test for dvh table flags
      tests: t3310-flags.sh: Add tests for remaining table types

Niklas Hambüchen (1):
      mkpart: Allow negative start value when FS-TYPE is not given

Pali Rohár (1):
      libparted: Add support for MBR id, GPT GUID and detection of UDF filesystem

Petr Uzel (3):
      Add support for NVMe devices
      libparted: only IEC units are treated as exact
      libparted: Fix starting CHS in protective MBR

Phillip Susi (11):
      maint: post-release administrivia
      parted: don't crash in disk_set when disk label not found
      parted: fix the rescue command
      Add NEWS entry for fat resize fix
      Fix crash when localized
      Fix make check
      tests: fix t6100-mdraid-partitions
      Fix set and disk_set to not crash when no flags are supported
      Fix resizepart iec unit end sector
      Lift 512 byte restriction on fat resize
      Fix atari label false positives

Richard W.M. Jones (1):
      linux: Include <sys/sysmacros.h> for major() macro.

Sebastian Parschauer (3):
      libparted: Don't warn if no HDIO_GET_IDENTITY ioctl
      Add support for RAM drives
      Add support for NVDIMM devices

Sebastian Rasmussen (1):
      libparted: Fix typo in hfs error message

Sergei Antonov (1):
      mac: copy partition type and name correctly

Shin'ichiro Kawasaki (4):
      configure.ac: Check ABI against ABI version 2
      libparted/labels/pt-tools.c: Fix gperf generated function attribute
      include/parted/unit.in.h: Specify const attribute to ped_unit_get_name()
      libparted: Replace abs() with llabs()

Simon Xu (1):
      Fix potential command line buffer overflow

Steven Lang (1):
      Use disk geometry as basis for ext2 sector sizes.

Ulrich Müller (1):
      libparted: Fix ending CHS address in PMBR.

Viktor Mihajlovski (4):
      fdasd: geometry handling updated from upstream s390-tools
      dasd: enhance device probing
      fdasd.c: Safeguard against geometry misprobing
      libparted/s390: Re-enabled virtio-attached DASD heuristics

Wang Dong (13):
      libparted/dasd: correct the offset where the first partition begins
      libparted/dasd: unify vtoc handling for cdl/ldl
      libparted/dasd: update and improve fdasd functions
      libparted/dasd: add new fdasd functions
      libparted/dasd: add test cases for the new fdasd functions
      parted: fix crash due to improper partition number input
      parted: fix wrong error label jump in mkpart
      clean the disk information when commands fail in interactive mode.
      parted: check the name of partition first when to name a partition
      parted/ui: remove unneccesary information of command line
      libpartd/dasd: improve flag processing for DASD-LDL
      libparted/dasd: add an exception for changing DASD-LDL partition table
      libparted/dasd: add test cases for the new fdasd functions

dann frazier (3):
      ped_unit_get_name: Resolve conflicting attributes 'const' and 'pure'
      Fix warnings from GCC 7's -Wimplicit-fallthrough
      Read NVMe model names from sysfs

Categories: FLOSS Project Planets

Plasma Mobile: weekly update: part 2

Planet KDE - Thu, 2019-10-10 20:00

Thanks to the awesome Plasma Mobile community, we are happy to present a second weekly update from Plasma Mobile project.

Shell user interface

Marco Martin made several changes in the shell to improve the overall user experience.

The application grid was updated to show application names in single line and with a smaller font size.

Marco Martin is also working on re-designing the top panel and top drawer, and on bugfixes related to that. Below are screenshots of the current state:

Both the top and bottom panels were updated to use the normal color scheme instead of the inverted/dark color scheme.

Marco Martin also added several fixes (1 and 2) in KWin/Wayland for fullscreen windows used by the top drawer and the window switcher.

Kirigami

Nicolas Fella added new API to Kirigami that allows us to make menus in a more traditional style on the desktop.

globalDrawer: Kirigami.GlobalDrawer {
    isMenu: true
    actions: [
        Kirigami.Action {
            icon.name: "document-import"
            text: i18n("Import contacts")
            onTriggered: {
                importFileDialog.open()
            }
        }
    ]
}

Setting the isMenu property to true on the drawer hides the drawer handle when used on the desktop. Instead, a similar-looking hamburger button appears in the toolbar, which behaves appropriately for the desktop.

Applications

Simon Schmeißer added various improvements to the QR-Code scanner application, qrca. It now supports decoding vcard QR-Codes which include trailing spaces, and features a Kirigami AboutPage. The sheet that appears once a code has been decoded now doesn’t flicker if the code is scanned a few times in a row. Jonah Brüchert ported the app’s pageStack to make use of the new Kirigami PagePool introduced in last week's blog post, which fixes page stack navigation issues with the About page.

Jonah Brüchert implemented setting a photo for contacts in plasma-phonebook. Nicolas Fella improved the contacts list in plasma-phonebook, simplifying the codebase. He also reworked the code for individual contact actions to make them functional and improve the scrolling experience.

The Settings application by default only shows the KCM modules which are suitable for the mobile platform. Jonah Brüchert fixed the audio configuration KCM module to add the supported form factors key in its desktop file, which makes the Audio module visible in the Settings application again. If you are developing a system settings module with Plasma Mobile in mind, don’t forget to add the X-KDE-FormFactors key in the metadata.desktop file, e.g.

X-KDE-FormFactors=handset,tablet,desktop

The MauiKit file management component can now make use of all the KIO-supported protocols, like kdeconnect, applications, recentdocuments, fonts, etc., to browse your file system. This allows you to seamlessly copy files and folders between protocols like webdav and sftp. MauiKit has also gained a couple of new components designed to be used as visual delegates for list and grid views. One of those is the new SwipeItemDelegate, which works on both wide and small screen form factors. This delegate can contain quick action buttons which, depending on the available space, are shown inline when hovering, or underneath, revealed by a swipe gesture.

Index, the file manager, has seen some feature improvements in the selection bar: when selected items are clicked you get a visual preview of the file, and on long press the item is removed from the selection bar, making it easy to keep track of what you have selected. You can also mark files as Favorites and browse them easily in a new dedicated Quick section in the sidebar. The Miller column view now auto-scrolls to the last column. By making use of the new MauiKit delegate controls, files and directories in Index can be dragged on top of each other to perform actions like copy, move and link, and can also be dragged out of the app to be opened or shared with an external application. Thanks to the KIO framework, Index can now browse various KIO slaves like applications, favorites, webdav, remote, recently used, etc.

vvave, the music player, now has an improved Albums and Artists grid view, and has gained a lot of small papercut fixes to be ready for a release soon. If you are interested in helping test these early packages and reporting back issues, you can join the Telegram channel.

Downstream

Bhushan Shah worked on several changes in postmarketOS to make telephony on devices like the Pinephone and Librem 5 possible with Plasma Mobile. The upstream change, suggested by Alexander Akulich, was to not hardcode a Telepathy account name in the dialer source code.

We have successfully tested this change on Librem 5 developer kit.

Want to help?

Next time your name could be here! To find out the right task for you, from promotion to core system development, check out Find your way in Plasma Mobile. We are also always happy to welcome new contributors on our public channels. See you there!

Categories: FLOSS Project Planets

Markus Koschany: My Free Software Activities in September 2019

Planet Debian - Thu, 2019-10-10 16:49

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
  • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
  • I upgraded atomix to version 3.34.0 as requested, although it is unlikely that you will find a major difference from the previous version.
Debian Java Misc
  • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
  • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.
Debian LTS

This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
  • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
  • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
  • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
  • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
  • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version. DLA-1955-1 will be available shortly.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month and I have been assigned to work 15 hours on ELTS plus five hours from August. I used 15 of them for the following:

  • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVE in tcpdump. There were no reports of other security vulnerabilities for supported packages in this week.
  • ELA-163-1. Issued a security update for curl fixing 1 CVE.
  • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
  • ELA-172-1. Issued a security update for linux fixing 23 CVE.
  • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE. ELA-174-1 will be available shortly.

Categories: FLOSS Project Planets

FSF Blogs: IDAD 2019: Join us on October 12th, and use this special dust jacket to uphold the right to read

GNU Planet! - Thu, 2019-10-10 16:40

Each year we stage the International Day Against DRM (IDAD) to help others learn about the dangers of Digital Restrictions Management (DRM). For this year's IDAD on October 12th, we are focusing in particular on the increasing and disturbing amount of DRM present in ebooks and other online educational materials. Having so thoroughly invaded our leisure time, the digital infection known as DRM should not be allowed to spread into the classroom. Joining us in the fight for IDAD 2019 are the Electronic Frontier Foundation, Creative Commons, and The Document Foundation, among ten other participating organizations we are privileged to have standing with us in the fight against DRM.

In a bid to become the "Netflix of textbooks," and like many other publishers, Pearson is doing the opposite of what anyone committed to education should do: severely restricting a student's access to the materials they need for their courses through arbitrary page limits, "rented" books that disappear, and many which require a constant Internet connection.

Publishers like Pearson should not be allowed to decide the rigidly specific conditions under which a student can learn. No book should spy on your reading habits or simply "disappear" after you have had it for too long. In the digital age, it is unacceptable for a publisher to impose the same principles of scarcity that would apply to a physical product to a digital file. The computing revolution was caused by files being shared, not merely rented. Imposing these limitations on digital media is an attack on user freedom, no matter how much corporate PR may spin the story. It's our aim to let the world know that we support the rights of readers. You could say that for IDAD 2019, Defective by Design has you covered.

We have developed a dust jacket you can slip over any "dead tree" book that you are reading to warn others about the looming threat of DRM. Whether in school, in a coffee shop, or on the subway, it is an easy conversation starter about the insidious nature of DRM. We encourage all readers to use them, whether on the latest hardcover bestseller or the textbook you use in class (while you still have one).

Defective by Design will be printing high quality versions of the dust jacket for every book shipped from our friends at the GNU Press while supplies permit. And true to our mission, we are also releasing the source files to these designs so that others may do the same. They are fully editable and shareable in Scribus v1.5+, so feel free to print, share, translate, and give away your own printed copies to readers and anti-DRM activists in your area.

Using ebooks for educational purposes is far from a bad thing: in fact, we will be bringing together the global Defective by Design community to help improve the fully shareable and editable works like those published by our friends at FLOSS Manuals. We're excited to be promoting an opposition to "locked-down" learning by staging a global hackathon on free culture works in the #dbd channel on Freenode, or our own in-person meeting to help edit these ethical alternatives at our offices in Boston.

Activists all over the world come together on the International Day Against DRM to resist Digital Restrictions Management's massive and aggressive encroachment on our real digital rights.

This year, we're confident that we can show major book publishers like Pearson that putting a lock on learning is unacceptable. Join us on October 12th and beyond in our double-fronted attack to tell others about the evils of DRM, and to eliminate unethical digital publishing by contributing to free and ethical alternatives.

Spread the message
  • Print and share these covers as widely as you can, leaving them as freebies in libraries, coffee shops, and wherever books are appreciated. Snap a photo of your campaigning in action, and share it to social media with the tags #idad, #dbd, or #DefectivebyDesign.

  • To help us coordinate year-round actions against DRM, join the DRM Elimination Crew mailing list.

  • If you would like to translate the dust jacket into your language, please email campaigns@fsf.org and we will be happy to include it on the official Defective by Design site. We're currently offering them in English, Spanish, and German.

IDAD actions
Categories: FLOSS Project Planets

Dataquest: How to Analyze Survey Data with Python for Beginners

Planet Python - Thu, 2019-10-10 16:31

Learn to analyze and filter survey data, including multi-answer multiple choice questions, using Python in this beginner tutorial for non-coders!
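
As a taste of the kind of filtering the tutorial covers, here is a minimal, hypothetical pandas sketch (the column name and data are invented, and this is not the tutorial's own code) that counts respondents who picked a given option in a multi-answer multiple choice question:

    import pandas as pd

    # Hypothetical survey export: each respondent's multi-answer choices are
    # stored in a single delimited string, e.g. "Python; SQL; R".
    df = pd.DataFrame({
        "languages_used": ["Python; SQL", "R", "Python; R; SQL", None],
    })

    # Respondents who selected "Python" among their answers (missing answers count as "no").
    uses_python = df["languages_used"].str.contains("Python", na=False)
    print(f"{uses_python.sum()} of {len(df)} respondents selected Python")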

The post How to Analyze Survey Data with Python for Beginners appeared first on Dataquest.

Categories: FLOSS Project Planets

Continuum Analytics Blog: How to Restore Anaconda after Update to MacOS Catalina

Planet Python - Thu, 2019-10-10 16:30

macOS Catalina was released on October 7, 2019, and has been causing quite a stir for Anaconda users. Apple has decided that Anaconda's default install location in the root folder is not allowed. It moves…
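
The teaser above is cut off, but the underlying issue is well documented: Catalina makes the system volume read-only and moves folders it finds at the root of the disk into a "Relocated Items" folder. As a rough, hypothetical diagnostic only (the paths below are the usual suspects, not steps taken from the linked post), you could check where an old root-level Anaconda install ended up:

    from pathlib import Path

    # Hypothetical candidate locations for an Anaconda install after a Catalina upgrade.
    # Catalina typically parks folders it removes from "/" under "Relocated Items".
    candidates = [
        Path("/anaconda3"),                                        # pre-Catalina root-level install
        Path("/Users/Shared/Relocated Items/Security/anaconda3"),  # where Catalina may have moved it
        Path.home() / "anaconda3",                                 # per-user install, unaffected
    ]

    for path in candidates:
        print(f"{'found' if path.exists() else 'missing'}: {path}")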

The post How to Restore Anaconda after Update to MacOS Catalina appeared first on Anaconda.

Categories: FLOSS Project Planets

PyCharm: 2019.3 EAP 5

Planet Python - Thu, 2019-10-10 14:33

A new version of the Early Access Program (EAP) for PyCharm 2019.3 is available now! Download it from our website.

New for this version

Toggle between relative and absolute imports

PyCharm can now add imports using either relative or absolute paths within a source root. Use intention actions to convert absolute imports into relative ones, and relative imports into absolute ones.

Relative paths are also available through the import assistant: you can add relative imports when fixing missing imports within your current source root.
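
For illustration only (the package and module names below are hypothetical), these are the two equivalent forms the intention action converts between, assuming a layout with mypkg/config.py and mypkg/api.py under the same source root:

    # mypkg/api.py  (hypothetical module inside the package "mypkg")

    # Absolute import: spelled out from the source root.
    from mypkg.config import load_settings

    # Relative import: the same module referenced relative to the current package.
    # PyCharm's new intention action rewrites one form into the other.
    from .config import load_settings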

Select testing framework for all New Projects

PyCharm now allows you to preselect a default test runner for newly created projects. To configure this, go to File | Settings for New Projects for Windows and Linux, or File | Preferences for New Projects for macOS. Select Tools | Python Integrated Tools and, under the Testing section, choose your desired test runner.

MongoDB support is here!

We are excited to announce that we now have initial support for MongoDB. Already available in this EAP: you can observe collections and fields in the database explorer, explore your data in the data viewer, and perform queries in the query console.

If you wish to learn more about this feature, click here.

Improved GitHub experience

The Get from Version Control dialog was improved. There's now a GitHub-specific option for selecting repositories: you can scroll through the list of available repositories in your account.

Another improvement is the GitHub Pull Request window (accessible through VCS | Git | View Pull Requests), which shows the list of all pull requests in the project you're working with, along with their status and changed files. If you want to dig into a particular pull request, double-click it to see comments, review information, and more.

Further improvements
  • We fixed an issue where packages installed as editable led to unresolved references.
  • The stub packages experience was improved:
    • Incompatible suggestions between stub packages and runtime packages are no longer an issue.
    • PyCharm will now suggest newer versions when available.
  • For more details on what's new in this version, see the release notes.
Interested?

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP.

If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP, and stay up to date. You can find the installation instructions on our website.

Categories: FLOSS Project Planets

DrupalCon News: Making DrupalCon Environmentally Sustainable

Planet Drupal - Thu, 2019-10-10 14:20

A proper Mojito starts with a bar spoon of sugar and 2-3 slices of lime in the bottom of a shaker. Top that with 14-16 fresh mint leaves, and muddle them gently into the padding to release the delicate oils. Be careful not to bruise the leaves or they will release chlorophyll, which will give us a bitter drink. Fill the glass with ice, 1.5 oz white rum, 1/2 cup club soda, and stir gently. Voila, a perfect blend of refreshment for you.

Categories: FLOSS Project Planets

Hook 42: Ride into the Danger Zone: How to Update Drupal 8 Field Settings without Losing any Data

Planet Drupal - Thu, 2019-10-10 13:15
Ride into the Danger Zone: How to Update Drupal 8 Field Settings without Losing any Data
Michael Wojcik - Thu, 10/10/2019 - 17:15
Categories: FLOSS Project Planets

Drupal Association blog: Drupal Core Beta Testing Program call for the upcoming Drupal 8.8.0 release

Planet Drupal - Thu, 2019-10-10 12:30

As announced in December 2018, the Drupal Association assists the Drupal project by coordinating a beta testing program for minor releases of Drupal core.

Agencies and other organizations who support ambitious Drupal 8 sites are invited to be part of the beta testing program. This means that, when a beta release is about to be made, we can help core maintainers work with organizations in the Beta Testing Program to install the beta core release on real-world examples of Drupal websites, in their staging environments. The beta testers can then report back to the core maintainers, in a structured way, any issues they see when running the beta core release.

Being part of the Beta Testing Program is a key contribution to the Drupal project, and it also helps organizations stay aware of any changes relevant to the websites they support.

Would your organization, and the Drupal project, benefit from participating in the Beta Testing Program? You can apply to join:

Apply to participate in the program

Who should apply?

Agencies and site owners who maintain large and complex Drupal 8 production sites. In particular, sites that use a wide range of contributed and custom modules or have large volumes of content.

Categories: FLOSS Project Planets

Mediacurrent: Migrating Apigee Developer Portals to Drupal 8

Planet Drupal - Thu, 2019-10-10 11:59

For several years, Google has leveraged Drupal as the primary tool for developer portals built for its popular Apigee Edge API platform. With the introduction of the production-ready Drupal 8 distribution in May 2019, it was announced that support for the D7 platform would expire in 12 months. Alongside that announcement, we also know that Drupal 7's end of life will occur in November 2021. This means that many Apigee portals will need to make the move to Drupal 8 or Apigee's integrated portals in the near future.

In this article, we will walk through the steps to migrate Apigee portals from Drupal 7 to 8. The first decision you will need to make is whether to upgrade your existing custom build or move to the new Drupal 8 kickstart distribution. To help guide this decision, let’s first take a look at what the Apigee distribution for Drupal 8 has to offer and why you would want to leverage this platform.


Apigee Developer Portal Kickstart (D8)

The Apigee documentation site has excellent instructions on how to set up a developer portal using their Drupal 8 distribution. We will take a quick look at the features that come with the newest install profile.

[Screenshot: Apigee Kickstart homepage]

The Apigee distribution once again offers a nice out-of-the-box experience. This time around, the front end is built on a Bootstrap base theme that makes it easy to brand and customize your site.

The content types you see will be familiar: Article, Basic page, FAQ, Forums, and a new Landing page content type. Video, images, and audio are now more appropriately Media types in Drupal 8. The SmartDocs content type is gone in favor of a new API Doc type that supports the OpenAPI format (see below).

[Screenshot: API Doc content type]

Adding content is now more flexible in Drupal 8 with the implementation of Paragraph types. Paragraphs allow you to add different components onto the page in any order you like. See the homepage example below.
 

[Screenshot: Paragraph types on the Apigee homepage]

In Drupal 8, Apigee also added some new block types. Blocks are still useful for components that need to live on more than one page.

 

[Screenshot: Apigee block types]

The great thing about Apigee's distribution is that it also includes sample content, which makes getting set up a breeze.

For organizations setting up a portal for the first time, leveraging this distribution is the way to go. For portals being upgraded from Drupal 7, this is more of a judgment call. If your portal has been heavily customized, it might be better to move forward with a traditional Drupal 8 upgrade, which we will cover under Custom Migrations. If, however, your organization's portal previously took advantage of out-of-box functionality, then it makes sense to migrate content to Apigee's D8 project, which we will walk through next.
 

Migrating to Apigee Kickstart D8

The maintainers of the Apigee Kickstart distribution have supplied a module to make migrations as painless as possible. The apigee_kickstart_migrate sub-module provides the Migrate module configuration that maps Drupal 7 content to its newer Drupal 8 counterparts. Again, this is most helpful for portals that did not heavily customize content in Drupal 7. Included in this sub-module are instructions on how to run the migrations and how to extend them with custom fields.

The following table shows how content is mapped from the Drupal 7 portal to Drupal 8.

 

Drupal 7 (Devportal)            ->  Drupal 8 (Kickstart)

Content Types
  Article (article)             ->  Article (article)
    title                       ->  title
    body                        ->  body
    field_keywords              ->  field_tags
    field_content_tag           ->  field_tags

  Basic page (page)             ->  Basic page (page)
    title                       ->  title
    body                        ->  body

  FAQ (faq)                     ->  FAQ (faq)
    title                       ->  title
    body                        ->  field_answer
    field_detailed_question     ->  -

  Forum topic (forum)           ->  Forum topic (forum)
    title                       ->  title
    body                        ->  body
    taxonomy_forums             ->  taxonomy_forums

Comment Types
  Comment (comment)             ->  Comment (comment)
    author                      ->  author
    subject                     ->  subject
    comment_body                ->  comment_body

Taxonomy
  Forums (forums)               ->  Forum (forums)
    name                        ->  name

  Tags (tags)                   ->  Tags (tags)
    name                        ->  name


Custom migrations

When would you go with a custom Drupal 8 upgrade over leveraging the Kickstart project? 

You run into trouble with Drupal distributions when you lean on so many customizations that the distribution gets in the way more than it saves time. In those instances, it's better to stick with your own custom implementation.

The Mediacurrent team recently released the Migrate Pack module to make things easier for developers. This module has been tested against several sites and distributions including the Apigee Drupal 7 install profile.

The approach here would be to install Migrate Pack and the two additional Apigee modules in lieu of leveraging the distribution. The two key Apigee features you will need are the Apigee API Catalog and Apigee Edge modules. All of these projects should be installed using Composer.

If your theme was built custom in Drupal 7, it will need to be manually ported to Drupal 8's Twig-based theme engine. The other option is to borrow the Bootstrap-based theme included with Apigee's distribution. It should be said that if the latter approach is taken, it might be better to migrate everything to the new Kickstart distribution rather than cherry-picking the theme.

Next Steps

Apigee has very good support and documentation to get you started on moving to Drupal 8. For issues and bugs specific to the Drupal distribution, the GitHub project issue queue is the best place to look. The Migrate Pack module also has its own issue queue on Drupal.org should you run into problems.

Mediacurrent has logged over 100,000 hours in Drupal 8 development, many of which are Drupal 7 to 8 upgrades. We would love to work with you on your next project. 

Please visit our contact page to get in touch or hit me up on Twitter to talk more. We also have comments below to gather your feedback and questions.

Categories: FLOSS Project Planets

Pages