Feeds

Hideki Yamane: PoC: use Sphinx for debian-policy

Planet Debian - Mon, 2017-06-19 09:09
Before the party, we held our monthly study meeting and I gave a talk about a tiny hack for the debian-policy document.
debian-policy was converted from debian-sgml to DocBook in 4.0.0, and my proposal is "move forward to Sphinx".

Here's a sample, and you can also get the PoC source from my GitHub repo and check it out.
Categories: FLOSS Project Planets

Brooklyn 0.1 is out there: full Telegram and IRC support

Planet KDE - Mon, 2017-06-19 09:02


I'm glad to announce that a first stable version of Brooklyn is released!
What's new? Well:

  • Telegram and IRC APIs are fully supported;
  • it manages attachments (even Telegram's video notes), also on text-only protocols, through a web server;
  • it has an anti-flood feature on IRC (e.g. it doesn't notify other channels if a user logs out without writing any message). For this I have to say "thank you" to Cristian Baldi, a W2L developer who had this fabulous idea;
  • it provides support for edited messages;
  • SASL login mechanism is implemented;
  • map locations are supported through OpenStreetMap;
  • you can see a list of other channels' members by typing "botName users" on IRC or by using the "/users" command on Telegram;
  • if someone writes a private message to the bot instead of in a public channel, it sends them the license message "This software is released under the GNU AGPL license. https://phabricator.kde.org/source/brooklyn/";
As you may have already noticed, after talking with my mentor I decided to modify the GSoC timeline. We decided to wait until the Rocket.Chat REST APIs are more stable, and in the meantime to provide a fully working IRC/Telegram bridge.
This helped me deliver a more stable and useful piece of software for the first evaluation.
We are also considering writing a custom wrapper for the REST APIs because current solutions don't fit our needs.

The last post reached over 600 people and that's awesome!
As always I will appreciate every single suggestion.
Have you tried the application? Do you have any plans to do so? Tell me everything in the comments section down below!

Categories: FLOSS Project Planets

Doug Hellmann: time — Clock Time — PyMOTW 3

Planet Python - Mon, 2017-06-19 09:00
The time module provides access to several different types of clocks, each useful for different purposes. The standard system calls like time() report the system “wall clock” time. The monotonic() clock can be used to measure elapsed time in a long-running process because it is guaranteed never to move backwards, even if the system time … Continue reading time — Clock Time — PyMOTW 3
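The distinction the excerpt draws can be sketched in a few lines (a minimal illustration of the two clocks, not part of the original excerpt):

```python
import time

# Wall-clock time: can jump forwards or backwards if the system
# clock is adjusted (NTP sync, manual changes).
wall_start = time.time()

# Monotonic clock: guaranteed never to move backwards, so it is
# the right choice for measuring elapsed time.
mono_start = time.monotonic()

time.sleep(0.1)

wall_elapsed = time.time() - wall_start
mono_elapsed = time.monotonic() - mono_start

print("wall: %.3fs, monotonic: %.3fs" % (wall_elapsed, mono_elapsed))
```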
Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Aileen Nielsen

Planet Python - Mon, 2017-06-19 08:30

This week we welcome Aileen Nielsen as our PyDev of the Week. Aileen has been using Python in the data science field for a while now. She recently gave a tutorial on Time Series Analysis at PyCon 2017 and she also did a talk on NoSQL Python at PyData Amsterdam 2016. Let’s take a few moments to learn more about our fellow developer!

Can you tell us a little about yourself (hobbies, education, etc):

I’m a software engineer at One Drop, a diabetes management platform. We’re trying to help people better understand and manage their chronic conditions with the use of technology, data analysis, and expert coaching.

I spent a lot of time in school (law school, ABD in physics grad school), so I consider myself an eclectic person as far as academic interests go, and I like to read non-fiction in lots of areas. Right now I’m most interested in non-fiction books about spying and organized crime. My hobbies are traveling and hiking. When I’m not working, I try not to be in front of a screen.

Why did you start using Python?

I ‘grew up’ with R and some proprietary data analysis software packages I used in physics grad school (Igor, Matlab). However, I was frustrated with proprietary software solutions because they’re not portable and not so well discussed on forums like Stack Overflow.

Over time I got more drawn into Python because the documentation is user friendly and well presented. Also, the Python community is so active and welcoming both online and in person that it’s easy to get started. I like that Python has such an eclectic base of industry, academic, and hobbyist users.

What other programming languages do you know and which is your favorite?

I have done most other work in R, C++, and Objective-C. Apart from Python, my favorite programming language is C++ because the syntax is so precise but also so wonderfully complicated.

What projects are you working on now?

I like to give talks about Python and how it relates to my work. At the moment, I’m putting together a four-part tutorial on Machine Learning for Healthcare in Python and R.

Which Python libraries are your favorite (core or 3rd party)?

The obligatory answer in my line of work is pandas, numpy, and scipy. That said, I’ve spent a lot of quality personal time with scrapy.

Otherwise I tend to go looking for really specific algorithms not covered in scipy and related packages. In that case, I often find github is my best friend and that most algorithms I look for have several well implemented gists I can use to see examples of what I want to do.

Is there anything else you’d like to say?

I’ve gravitated towards the Python community because it is so open, welcoming, and active. Apart from its beautiful syntax, Python offers lots of opportunities to get to know interesting people working on fantastically interesting problems. I believe Python will keep growing so long as this continues to be the case.

Thanks for doing the interview!

Categories: FLOSS Project Planets

Sergey Beryozkin: SwaggerUI in CXF or what Child's Play really means

Planet Apache - Mon, 2017-06-19 07:07
We've had an extensive demonstration of how to enable Swagger UI for CXF endpoints returning Swagger documents for a while, but the only 'problem' was that our demos only showed how to unpack a SwaggerUI module into a local folder with the help of a Maven plugin and make these unpacked resources available to browsers.
It was not immediately obvious to users how to activate SwaggerUI, and with news coming from SpringBoot land that it is apparently really easy to do over there, it was time to look at making it easier for CXF users.
So Aki, Andriy and I talked, and this is what CXF 3.1.7 users have to do:

1. Have Swagger2Feature activated to get Swagger JSON returned
2. Add a swagger-ui dependency to the runtime classpath.
3. Access Swagger UI

For example, run a description_swagger2 demo. After starting a server go to the CXF Services page and you will see:


Click on the link and see a familiar Swagger UI page showing your endpoint's API.

Have you ever wondered what some developers mean when they say that trying out what they have built is child's play? You'll be hard pressed to find a better example than trying Swagger UI with CXF 3.1.7 :-)

Note that in CXF 3.1.8-SNAPSHOT we have already fixed it to work for Blueprint endpoints in OSGi (with help from Łukasz Dywicki). The SwaggerUI auto-linking code has also been improved to better support some older browsers.

Besides, CXF 3.1.8 will also offer proper support for Swagger correctly representing multiple JAX-RS endpoints, based on the fix contributed by Andriy and available in Swagger 1.5.10, as well as for the case where the API interface and implementations live in separate (OSGi) bundles (Łukasz figured out how to make that work).

Before I finish let me return to the description_swagger2 demo. Add a cxf-rt-rs-service-description dependency to pom.xml. Start the server and check the services page:


Of course some users do and will continue working with XML-based services, and WADL is the best language around for describing such services. If you click on a WADL link you will see an XML document returned. WADLGenerator can be configured with an XSLT template reference, and with a good template you can get a UI as good as this Apache Syncope document.

Whatever your data representation preferences are, CXF will get you supported.

 




Categories: FLOSS Project Planets

Calamares Testing

Planet KDE - Mon, 2017-06-19 03:29

My project for Blue Systems is maintaining Calamares, the distro-independent installer framework. Not surprisingly, working on it means installing lots of Linux distros. Here's my physical-hardware testing setup: two identical older HP desktop machines and a stack of physical DVDs. Very old-school. Often I use VirtualBox, but sometimes the hum of a DVD is just what I need to calm down. There's a KDE Neon, a Manjaro and a Netrunner DVD there, but the machine labeled Ubuntu is running Kannolo and sporting an openSUSE Geeko.

I’m all for eclecticism.

So far, I’ve found one new bug in Calamares and fixed a handful of them. I’m thankful to Teo, the previous Calamares maintainer, for providing helpful historical information, and to the downstream users (e.g. the distros) for being cheerful in explaining their needs.

Installing a bunch of different modern Linuxen is kind of neat; the variations in KDE Plasma Desktop configuration and branding are wild. Nearly all of them have trouble being usable on small screen sizes (e.g. the 800×600 that VirtualBox starts with; this has since been fixed). They all seem to install VirtualBox guest additions and can handle resizes immediately, so it's not a huge issue, just annoying. I've only broken one of my Linux installs so far (running an update, which then crashed kscreenlocker, and now it just comes up as a black screen). I've got a KDE Neon dev/unstable as my main development VM set up, with KDevelop and the whole shizzle .. it's very nice inside my KDE 4 desktop on FreeBSD.

I’ve got two favorite features, so far, in Linux live CDs and in KDE Plasma installations: ejecting the live CD on shutdown (Neon does this) and skipping the confirmation screen + 30 second timeout when clicking logout or shutdown (Netrunner does this).

So, time to hunker down with the list of issues, and in the meantime: keep on installin’.

Categories: FLOSS Project Planets

Kushal Das: dgplug summer training 2017 is on

Planet Python - Mon, 2017-06-19 02:38

Yesterday evening we started the 10th edition of the dgplug summer training program. We had around 70 active participants in the session; a few people had informed us beforehand that they would not be available during the first session. We also knew that at the same time there was an India-vs-Pakistan cricket match, which meant many Indian participants would miss day one (though it seems the Indian cricket team tried their level best to make sure that participants stopped watching the match :D ).

We started with the usual process: Sayan and /me explained the different rules related to the sessions, and also talked about IRC. The IRC channel #dgplug is not only a place to discuss technical things, but also a place for many dgplug members to discuss everyday things. We ask the participants to stay online as long as possible in the initial days and to ask as many questions as they want. Asking questions is a very important part of these sessions, as many people are scared to do so in public.

We also had our regular members in the channel during the session, and after the session ended, we got into other discussions as usual.

One thing I noticed was the high number of students participating from the Zakir Hussain College of Engineering, Aligarh Muslim University, Aligarh, India. When I asked how come so many of them were there, they said the credit goes to cran-cg (Chiranjeev Gupta), who motivated the first-year students to take part in the session. Thank you, cran-cg, for not only taking part but also building a local group of Free Software users/developers. We also have Nisha, a fresh economics graduate, taking part in this year’s program.

As it happened, day one was on a Sunday, but from now on all the sessions will be on weekdays only, unless it is a special guest session where a weekend works better for our guest. Our next session is at 13:30 UTC today, in the #dgplug channel on the Freenode server. If you want to help, just be there :)

Categories: FLOSS Project Planets

Drupal core announcements: Drupal core security release window on Wednesday, June 21, 2017

Planet Drupal - Mon, 2017-06-19 00:32
Start: 2017-06-21 12:00 America/New_York
Organizers: xjm
Event type: Online meeting (e.g. IRC meeting)

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, June 21.

This does not mean that a Drupal core security release will necessarily take place on that date for any of the Drupal 8 or 7 branches, only that you should watch for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix or stable feature release on this date. The next window for a Drupal core patch (bug fix) release for all branches is Wednesday, July 05. The next scheduled minor (feature) release for Drupal 8 will be on Wednesday, October 5.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: FLOSS Project Planets

Michal Čihař: Call for Weblate translations

Planet Debian - Mon, 2017-06-19 00:00

Weblate 2.15 is almost ready (I expect no further code changes), so it's really a great time to contribute to its translations! Weblate 2.15 should be released early next week.

As you might expect, Weblate is translated using Weblate itself, so contributing should be really easy. In case something is unclear, you can look into the Weblate documentation.

I'd especially like to see improvements in the Italian translation, which was one of the first in Weblate's beginnings but hasn't received much love in past years.

Filed under: Debian English SUSE Weblate

Categories: FLOSS Project Planets

First blog post

Planet KDE - Sun, 2017-06-18 22:49

This is my first blog post. It’s a great opportunity to start documenting my journey as a software engineer with my GSoC project with digiKam as a part of KDE this summer.


Categories: FLOSS Project Planets

Hynek Schlawack: Why Your Dockerized Application Isn’t Receiving Signals

Planet Python - Sun, 2017-06-18 20:00

Proper cleanup when terminating your application is no less important when it's running inside a Docker container. Although it only comes down to making sure signals reach your application and handling them, there are a bunch of things that can go wrong.
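The teaser's core point can be sketched with a hedged example: a process running as PID 1 in a container gets no default signal dispositions from the kernel, so without an explicit handler a SIGTERM from "docker stop" is simply ignored until the SIGKILL grace period expires. A minimal Python handler (an illustrative sketch, not code from the post) looks like:

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Perform cleanup here (close connections, flush buffers) and exit,
    # so that "docker stop" terminates the container promptly.
    print("received SIGTERM, shutting down")
    sys.exit(0)

# Register the handler; running as PID 1, the process would otherwise
# ignore SIGTERM entirely, since the kernel applies no default action
# to an init process.
signal.signal(signal.SIGTERM, handle_sigterm)
```

This only covers signal handling; signal *delivery* additionally depends on using the exec form of CMD/ENTRYPOINT so your process actually runs as PID 1 rather than as a child of a shell.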

Categories: FLOSS Project Planets

Aaron Morton: Reaper 0.6.1 released

Planet Apache - Sun, 2017-06-18 20:00

Since we created our hard fork of Spotify’s great repair tool, Reaper, we’ve been committed to making it the “de facto” community tool for managing repairs of Apache Cassandra clusters.
This required Reaper to support all versions of Apache Cassandra (starting from 1.2) and some features it lacked, like incremental repair.
Another thing we really wanted was to remove the dependency on a Postgres database to store Reaper's data. As Apache Cassandra users, it felt natural to store it in our favorite database.

Reaper 0.6.1

We are happy to announce the release of Reaper 0.6.1.

Apache Cassandra as a backend storage for Reaper was introduced in 0.4.0, but it turned out to create a high load on the cluster hosting its data.
While the Postgres backend could rely on indexes to search efficiently for segments to process, the Cassandra backend had to scan all segments and filter afterwards. The initial data model didn’t account for the frequency of those scans, which generated a lot of requests per second once you had repairs with hundreds (if not thousands) of segments.
It also seems Reaper was designed to work on clusters that do not use vnodes: computing the number of possible parallel segment repairs for a job used the number of tokens divided by the replication factor, instead of the number of nodes divided by the replication factor.
This led to a lot of overhead, with threads trying and failing to repair segments because the nodes were already involved in a repair operation, each attempt generating a full scan of all segments.
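To make the miscomputation concrete, here is a hypothetical sketch using a cluster like the one measured below (3 nodes, 32 vnodes each, replication factor 3). The numbers are illustrative only, not taken from Reaper's source:

```python
# Illustrative numbers for a small vnode cluster; not Reaper code.
nodes = 3
vnodes_per_node = 32
replication_factor = 3

tokens = nodes * vnodes_per_node  # 96 token ranges in total

# Old computation: token count divided by the replication factor.
# With vnodes this suggests dozens of concurrent segment repairs...
buggy_parallelism = tokens // replication_factor

# Fixed computation: node count divided by the replication factor.
# ...while this cluster can really only run one repair at a time.
fixed_parallelism = nodes // replication_factor

print(buggy_parallelism, fixed_parallelism)  # 32 1
```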

Both issues are fixed in Reaper 0.6.1, with a brand new data model that requires a single query to get all segments for a run, the use of timeuuids instead of long ids (to avoid lightweight transactions when generating repair/segment ids), and a fixed computation of the number of possible parallel repairs.

The following graph shows the differences before and after the fix, observed on a three-node cluster using 32 vnodes:

The load on the nodes is now comparable to running Reaper with the memory backend:

This release makes Apache Cassandra a first class citizen as a Reaper backend!

Upcoming features with the Apache Cassandra backend

On top of not having to administer yet another kind of database on top of Apache Cassandra to run Reaper, we can now better integrate with multi region clusters and handle security concerns related to JMX access.

First, the Apache Cassandra backend allows us to start several instances of Reaper instead of one, making it fault tolerant. Instances will share the work on segments using lightweight transactions, and metrics will be stored in the database. On multi-region clusters, where the JMX port is closed for cross-DC communications, it will be possible to start one or more instances of Reaper in each region. They will coordinate through the backend, and Reaper will still be able to apply backpressure mechanisms by monitoring the whole cluster for running repairs and pending compactions.

Next comes “local mode”, for companies that apply strict security policies to the JMX port and forbid all remote access. In this specific case, a new parameter was added to the configuration YAML file to activate local mode, and you will need to start one instance of Reaper on each C* node. Each instance will then only communicate with the local node on 127.0.0.1 and ignore all tokens for which that node isn’t a replica.

Those features are both available in a feature branch that will be merged before the next release.

While the fault tolerance features have been tested in different scenarios and are considered ready for use, local mode still needs a little more work before it can be used on real clusters.

Improving the frontend too

So far, we hadn’t touched the frontend, focusing on the backend instead.
Now we are giving some love to the UI as well. On top of making it more usable and better looking, we are pushing some new features that will make Reaper “not just a tool for managing repairs”.

The first significant addition is the new cluster health view on the home screen:

One quick look at this screen will give you each node's individual status (up/down) and the size on disk for each node, rack and datacenter of the clusters Reaper is connected to.

Then we’ve reorganized the other screens, making forms and lists collapsible and adding a bit of color:

All those UI changes were just merged into master for your testing pleasure, so feel free to build and deploy, and be sure to give us feedback on the reaper mailing list!

Categories: FLOSS Project Planets

Simon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

Planet Debian - Sun, 2017-06-18 18:42

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smooth, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I’d do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn’t), so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$

This fails because scdaemon is not installed. Aren't smartcards common enough that scdaemon should be installed by default on a GNOME desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why that does not work. At this point I usually recall that I want pcscd installed, since I work with smartcards in general.

root@latte:~# apt-get install pcscd

Now gpg --card-status works!

jas@latte:~$ gpg --card-status
Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8 8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852 D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A 3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$

You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$

Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.

root@latte:~# apt-get install dirmngr

Below I proceed to trust the clouds to find my key.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$

Now the public key and the secret key stub are available locally.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson
uid           [ unknown] Simon Josefsson
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson
uid           [ unknown] Simon Josefsson
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$

I am now able to sign data with the smartcard, yay!

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----
owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$

Encrypting to myself will not work smoothly though.

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128 5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852 D95B 9535 162A 78EC D86B
It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N)
gpg: signal Interrupt caught ... exiting
jas@latte:~$

The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.

jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson
[ unknown] (2)  Simon Josefsson

gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson
[ unknown] (2)  Simon Josefsson

Please decide how far you trust this user to correctly verify other
users' keys (by looking at passports, checking fingerprints from
different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson
[ unknown] (2)  Simon Josefsson

Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----
hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$

So everything is fine, isn’t it? Alas, not quite.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$

Tracking this down, I now realize that GNOME’s keyring is used for SSH but GnuPG’s gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.

jas@latte:~$ echo $GPG_AGENT_INFO
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK
/run/user/1000/keyring/ssh
jas@latte:~$

Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient, though: you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf
jas@latte:~$

Log out from GNOME and log in again. Now you should see ssh-add -L working.

jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$

Topics for further discussion or research include:

1. whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems;
2. whether gpg --card-status should attempt to import the public key and secret key stub automatically;
3. why GNOME keyring is used by default for SSH rather than gpg-agent;
4. whether GNOME keyring should support smartcards, or if it is better to always use gpg-agent for GnuPG/SSH;
5. whether something could or should be done to automatically infer the trust setting for a secret key.

Enjoy!

Categories: FLOSS Project Planets

Tomasz Früboes: Python 3.6 – Critical Mass Reached?

Planet Python - Sun, 2017-06-18 17:03

For a while there was a slightly uncomfortable situation inside the Python ecosystem. On the one hand, any newcomer to the Python world is rather soon exposed to the "there should be one-- and preferably only one --obvious way to do it" philosophy. On the other hand, a brutal violation of this philosophy can be seen when one visits the Python download page. Should I get 2.7 or 3-something? Tough choice…

For me, as for lots of Python users, it wasn’t really a problem. I have always used Python 2.7, kinda for one silly reason: the way the print statement was a statement and not a function (which, as you probably know, changed in the 3 series). No new feature that went into the 3 series was enough to make me change my habit. As you probably suspect already, this was true only up until now.

You may wonder: what is the killer feature that made me this enthusiastic about python 3? There are actually two of them. The first one is the ability to pinpoint memory leaks with the tracemalloc module. It has been present in the 3 series for some time (and, with some painful setup, is usable also in 2.7), but alone it wasn’t enough to make me consider python 3. The second one was added in the latest 3.6 release – formatted string literals. I’ll cover both of them in this post.

The tracemalloc module

If you want to optimize your code for execution speed, the story has been dead simple for a long time. You import cProfile, run your code under it and view the results (e.g. using snakeviz). If you need finer granularity on the source code level (i.e. measurements for a given line in your source file instead of function-level stats), you go for the ‘line_profiler’ package. Easy and efficient.
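The speed-profiling workflow described above fits in a few lines; here is a minimal sketch (the profiled function is just a stand-in):

```python
import cProfile
import io
import pstats

def work():
    # something worth measuring
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
print(stream.getvalue())
```

From here, dumping the stats to a file and opening them in snakeviz gives the same data as a browsable graph.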

To learn where and how much memory was allocated, you could use the memory_profiler package. The printout was informative, but measurement came with a significant cost of slower code execution (in the snippet shown below I’ve measured the slowdown to be of the order of 20 times). The situation got better with the introduction of the tracemalloc module, where the slowdown is significantly lower (2x measured on the same code). As usual, we’ll see the usage with an example:

#! /usr/bin/env python
import tracemalloc
tracemalloc.start()

def main():
    test = {}
    for x in range(10000):
        test[x] = str(x)*100
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics('lineno')[:5]:
        print(stat)

if __name__ == "__main__":
    main()

Running this should yield the following output:

test_tm.py:8: size=4564 KiB, count=10001, average=467 B
test_tm.py:7: size=266 KiB, count=9743, average=28 B
test_tm.py:5: size=712 B, count=2, average=356 B

which immediately tells us which line causes most of the memory allocations.

Tracemalloc is also available for python 2.7, but installation requires recompiling python from source with some patches applied. Not very complicated but time-consuming enough to prevent me from doing it apart from one single occasion when I was up against a wall. So – having tracemalloc as a standard module of python 3 gives you a great deal of functionality without any struggle.
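Beyond a one-shot snapshot, tracemalloc can also diff two snapshots, which helps isolate what grew between two points in a program. A minimal sketch (exact sizes and filenames will differ on your machine):

```python
import tracemalloc

tracemalloc.start()

data = []
snapshot1 = tracemalloc.take_snapshot()

# allocate a noticeable amount between the two snapshots
data.extend(str(x) * 100 for x in range(1000))

snapshot2 = tracemalloc.take_snapshot()

# statistics of what changed, biggest growth first
for stat in snapshot2.compare_to(snapshot1, 'lineno')[:3]:
    print(stat)
```

The StatisticDiff entries carry a size_diff field, so a leak shows up as a line whose size_diff keeps growing between successive snapshots.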

Formatted string literals

The second feature I would like to advertise today is string interpolation, also known as formatted string literals, added in the latest python release (3.6). The idea is quite simple but very convenient and powerful – now you can embed python expressions inside string literals. Among other things, this allows you to put local variables into a string without calling the ‘format’ function or using ‘%’ notation. The following example illustrates this:

def main():
    val1 = 1
    val2 = 3
    val3 = Exception("Whoops!")
    todo = [f"Reference local variable: {val1}",
            f"Divison of two variables: {val1/val2}",
            f"Divison with format specifier: {val1/val2:.2f}",
            f"Function calls are OK: {list(map(lambda x: x*x, [1,2,3,4]))}",
            f"Exception caught: {val3} # standard, using str",
            f"Exception caught: {val3!r} # using repr"]

    for t in todo:
        print(t)

if __name__ == "__main__":
    main()

This should produce output as follows:

Reference local variable: 1
Divison of two variables: 0.3333333333333333
Divison with format specifier: 0.33
Function calls are OK: [1, 4, 9, 16]
Exception caught: Whoops! # standard, using str
Exception caught: Exception('Whoops!',) # using repr

The notation is pretty neat and compact. All you have to do is start your string definition with ‘f’ and embed any number of valid python expressions inside curly braces. You can also use standard format definitions (as you would when using the ‘format’ function) after a colon; see line 7 in the source code above. It is worth noting that you can also control the way a given variable is converted to a string. By default, the ‘str’ function is used. You can force python to use the ‘repr’ (or ‘ascii’) function by adding !r (or !a for ‘ascii’) after the expression, for example ‘{val3!r}’ (this is what we did in line 10 of the example above). One last thing worth noting is that you cannot use the ‘!’ and ‘:’ characters inside your embedded expressions (given their special purpose). The only exception is the `!=` operator. Edit: you can actually use the ‘!’ and ‘:’ characters inside your expression as long as you nest them inside parentheses (as in line 8 of the example above).

Overall, string interpolation is a feature that I had been missing in python from the moment I was first introduced to the idea while experimenting with the Scala programming language.

Wrap up

“Critical mass” is a term that may refer to several different phenomena. In physics it means the (smallest) amount of material allowing a sustained nuclear chain reaction. The situation with the number of new features added to python 3 (and deliberately not added to the 2 series) reminds me of this term in its first, physics-related, meaning. With this release, the amount of goodies that I would miss by sticking to the 2.7 release is simply too large. I guess it may be the same for others, maybe to the point that we soon start seeing a growing number of python 3-only packages uploaded to PyPI. So it’s really the moment to say ‘thank you’ to 2.7, and move to the 3 series.

p.s. Python 3.6 is more than 6 months old now. I’ve just recently learned about the string interpolation feature by watching a great presentation from PyCon 2017.

Categories: FLOSS Project Planets

tryexceptpass: That varies quite a bit depending on what you’re doing and how you need to use the result that you…

Planet Python - Sun, 2017-06-18 15:45

That varies quite a bit depending on what you’re doing and how you need to use the result that you’re waiting for. The main mechanisms to handle those situations are:

  • If you’re calling a function that returns a result and you have to process that result (like in your example), wrap it in another async function, await your async method in there, then do the processing.
  • Use a callback function that your async method will execute when it finishes, passing the result there.
  • Use shared storage. There are various queue systems for this, like asyncio.Queue; note that asyncio.Queue itself is not thread-safe, so if you have to go across threads use queue.Queue or loop.call_soon_threadsafe instead.
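The three mechanisms above can be sketched in a few lines; fetch_value here is a made-up stand-in for whatever coroutine produces the result you are waiting for:

```python
import asyncio

async def fetch_value():
    # stands in for any async operation whose result you need
    await asyncio.sleep(0.01)
    return 42

# 1. Wrap it: await inside another coroutine, then process the result.
async def wrapped():
    result = await fetch_value()
    return result * 2

# 2. Callback: attach a function that runs when the task finishes.
def on_done(task):
    print('callback saw', task.result())

async def main():
    print(await wrapped())

    task = asyncio.ensure_future(fetch_value())
    task.add_done_callback(on_done)
    await task

    # 3. Shared storage: hand results over through a queue.
    queue = asyncio.Queue()
    queue.put_nowait(await fetch_value())
    print('queue got', await queue.get())

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()
```

Which mechanism fits best depends on whether the consumer of the result is itself a coroutine (use await), plain synchronous code (use a callback), or a separate producer/consumer pair (use a queue).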
Categories: FLOSS Project Planets

Alexander Wirt: alioth needs your help

Planet Debian - Sun, 2017-06-18 15:06

It may look like the decision for pagure as the alioth replacement is already finalized, but that’s not really true. I got a lot of feedback and tips in the last weeks, which made me postpone my decision. Several alternative systems were recommended to me; here are a few examples:

and probably several others. I won’t be able to evaluate all of those systems in advance of our sprint. That’s where you come in: if you are familiar with one of those systems, or want to get familiar with them, join us on our mailing list and create a wiki page below https://wiki.debian.org/Alioth/GitNext with a review of your system.

What do we need to know?

  • Feature set compared to current alioth
  • Feature set compared to a popular system like github
  • Some implementation designs
  • Some information about scaling (expect something like 15,000 to 25,000 repos)
  • Support for other version control systems
  • Advantages: why should we choose that system
  • Disadvantages: why shouldn’t we choose that system
  • License
  • Other interesting features
  • Details about extensibility
  • A really nice thing would be a working vagrant box / vagrantfile + ansible/puppet to test things

If you want to start on such a review, please announce it on the mailing list.

If you have questions, ask me on IRC, Twitter or mail. Thanks for your help!

Categories: FLOSS Project Planets

Chris Warrick: Unix locales vs Unicode (‘ascii’ codec can’t encode character…)

Planet Python - Sun, 2017-06-18 14:40

You might get unusual errors about Unicode and inability to convert to ASCII. Programs might just crash at random. Those are often simple to fix — all you need is correct locale configuration.

Has this ever happened to you?

Traceback (most recent call last):
  File "aogonek.py", line 1, in <module>
    print(u'\u0105')
UnicodeEncodeError: 'ascii' codec can't encode character '\u0105' in position 0: ordinal not in range(128)

Nikola: Could not guess locale for language en, using locale C

Input: ą
Desired ascii(): '\u0105'
Real ascii(): '\udcc4\udc85'

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    [...]
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

All those errors have the same root cause: incorrect locale configuration. To fix them all, you need to generate the missing locales and set them.

Check currently used locale

The locale command (without arguments) should tell you which locales you’re currently using. (The list might be shorter on your end)

$ locale
LANG="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

If any of those is set to C or POSIX, has an encoding other than UTF-8 (sometimes spelled utf8), or is empty (with the exception of LC_ALL), or if you see any errors, you need to reconfigure your locale.

Check locale availability and install missing locales

The first thing you need to do is check locale availability. To do this, run locale -a. This will produce a list of all installed locales. You can use grep to get a more reasonable list.

$ locale -a | grep -i utf
<lists all UTF-8 locales>
$ locale -a | grep -i utf | grep -i en_US
en_US.UTF-8

The best locale to use is the one for your language, with the UTF-8 encoding. The locale will be used by some console apps for output. I’m going to use en_US.UTF-8 in this guide.

If you can’t see any UTF-8 locales, or no appropriate locale setting for your language of choice, you might need to generate those. The required actions depend on your distro/OS.

  • Debian, Ubuntu, and derivatives: install language-pack-en-base, run sudo dpkg-reconfigure locales
  • RHEL, CentOS, Fedora: install glibc-langpack-en
  • Arch Linux: uncomment relevant entries in /etc/locale.gen and run sudo locale-gen (wiki)
  • For other OSes, refer to the documentation.

You need a UTF-8 locale to ensure compatibility with software. Avoid the C and POSIX locales (they’re ASCII) and locales with other encodings (those aren’t used by ~anyone these days).

Configure system-wide

On some systems, you may be able to configure locale system-wide. Check your system documentation for details. If your system has systemd, run

sudo localectl set-locale LANG=en_US.UTF-8

Configure for a single user

If your environment does not allow system-wide locale configuration (macOS, shared server with generated but unconfigured locales), or if you want to ensure it’s always configured independently of system settings, configure the locale for your user instead.

To do this, you need to edit the configuration file for your shell. If you’re using bash, it’s .bashrc (or .bash_profile on macOS). For zsh users, .zshrc. Add this line (or equivalent in your shell):

export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8

That should be enough. Note that those settings don’t apply to programs not launched through a shell.

Python/Windows corner: Python 3.7 will fix this on Unix by assuming UTF-8 if it encounters the C locale. On Windows, Python 3.6 is using UTF-8 interactively, but not when using shell redirections to files or pipes.
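To verify what Python itself picks up from the environment, you can query the locale module directly; this is a quick sanity check (the printed values depend on your configuration):

```python
import locale
import sys

# Pick up the locale from the environment (LANG / LC_* variables).
try:
    locale.setlocale(locale.LC_ALL, '')
except locale.Error:
    # The environment names a locale that is not generated/installed —
    # exactly the misconfiguration this post is about.
    print('locale not supported, falling back to C')

# The encoding Python will prefer for text I/O:
print(locale.getpreferredencoding())
# The encoding used for file names (usually 'utf-8' on Python 3):
print(sys.getfilesystemencoding())
```

On a correctly configured system the first print shows UTF-8; seeing ANSI_X3.4-1968 (i.e. ASCII) there means the C locale leaked through.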

This post was brought to you by ą — U+0105 LATIN SMALL LETTER A WITH OGONEK.

Categories: FLOSS Project Planets

GSoC: Weekly Blog

Planet KDE - Sun, 2017-06-18 14:06

Hi

There’s a lot I did in the last 2 weeks and since I did not update the blog last week, this post is going to include last 2 week’s progress.

Before I begin with what I did, here’s a quick review of what I was working on and what had been done.

I started porting Cantor’s Qalculate backend to QProcess. During the first week I worked on establishing a connection with Qalculate, for which we use qalc, and some amount of time was spent parsing the output returned by qalc.

 

The Qalculate backend as of now uses the libqalculate API for computing results. To successfully eliminate the direct use of the API, all commands should make use of qalc, but since qalc does not support all the functions of Qalculate, I had to segregate the parts depending on the API from those using qalc. For instance, qalc does not support plotting graphs.

The version of qalc that we are using supports almost all the major functionalities of Qalculate but there are a few things for which we still depend on the API directly

I will quickly describe what depends on what

API
* help command
* plotting
* syntax highlighter
* tab completion

qalc
* basic calculations: addition, subtraction etc
* all the math functions provided by Qalculate: sqrt(), binomial(), integrate() etc
* saving variables

The segregating part was easy. The other important thing I did was to build a queue-based system for the commands that need to be processed by qalc.

 

Queue based system

The two important components of this system are:

1. Expression queue: contains the expressions to be processed
2. Command queue: contains the commands of the expression being processed.

* The basic idea behind this system is: we compute only one expression at a time; meanwhile, if we get more expressions from the user, we store them in the queue and process them once the expression currently being processed is complete.

* Another important point is that, since an expression can contain multiple commands, we store all the commands of an expression in the command queue and, just as we process one expression at a time, we process one command at a time. That is, we give QProcess only one command at a time; this makes the output returned by QProcess less messy and hence easier to parse.

* Example: expression1 = (10+12, sqrt(12)): this expression has multiple commands. The command queue for it will hold two commands.

expression queue                                               command queue

[ expression 1 ] ——————————————-> [ 10+12 ], [ sqrt(12) ]

[ expression 2] ——————————————-> [ help plot ]

 

We solve all the commands of expression1, parse the output and then move on to expression2, and this goes on until the expression queue is empty.
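The two-level queue described above can be sketched as follows. This is an illustrative Python model, not Cantor’s actual C++/Qt implementation, and all names are made up:

```python
from collections import deque

class ExpressionQueue:
    """Expressions wait their turn; the commands inside the current
    expression are fed to the backend one at a time, so the backend's
    output stays unambiguous and easy to parse."""

    def __init__(self, run_command):
        # run_command(cmd, done) starts one backend command and calls
        # done(output) when it finishes (QProcess-style, asynchronous)
        self.run_command = run_command
        self.expressions = deque()
        self.busy = False

    def submit(self, commands):
        self.expressions.append(deque(commands))
        if not self.busy:
            self._next()

    def _next(self):
        while self.expressions:
            commands = self.expressions[0]
            if commands:
                self.busy = True
                self.run_command(commands.popleft(), self._command_done)
                return
            self.expressions.popleft()  # all commands of this expression done
        self.busy = False

    def _command_done(self, output):
        # parse `output` here, then move on to the next command
        self._next()


# Example: expression1 = (10+12, sqrt(12)) queues two commands
processed = []
q = ExpressionQueue(lambda cmd, done: (processed.append(cmd), done('')))
q.submit(['10+12', 'sqrt(12)'])
q.submit(['help plot'])
print(processed)  # ['10+12', 'sqrt(12)', 'help plot']
```

The key design point is that _next hands the backend exactly one command and only advances when the completion callback fires, which is what keeps the output parsing simple.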

 

Apart from this I worked on the Variable model of Qalculate. Qalc provides a lot of variations of the save command. The different commands available are:

Not every command mentioned below has been implemented but the important ones have been.

1. save(value, variable, category, title): Implemented

This function is available through the qalc interface and allows the user to define new variables or override the existing variables with the given value.

2. save/store variable: Implemented
This command allows the user to save the current result in a variable with the specified name.

Current result is the last computed result. Using qalc we can access the last result using ‘ans’, ‘answer’ and a few more variables.

3. save definitions: Not implemented

Definitions include user-defined variables, functions and units.

4. save mode: Not implemented
mode is the configuration of the user which include things like ‘angle unit’, ‘multiplication sign’ etc.

 

With this, most of the important functionalities have been ported to qalc, but there are still a few things for which we depend on the API directly. Hopefully, with a newer version of qalc, we will in the future be able to remove the direct use of the API from Cantor.

Thanks and Happy hacking


Categories: FLOSS Project Planets

Eriberto Mota: How to migrate from Debian Jessie to Stretch

Planet Debian - Sun, 2017-06-18 13:58

Welcome to Debian Stretch!

Yesterday, June 17, 2017, Debian 9 (Stretch) was released. I would like to go over some basic procedures and rules for migrating from Debian 8 (Jessie).

Initial steps
  • The first thing to do is read the release notes. This is essential to learn about possible bugs and special situations.
  • The second step is to fully update Jessie before migrating to Stretch. To do so, still inside Debian 8, run the following commands:

# apt-get update
# apt-get dist-upgrade

Migrating
  • Edit the /etc/apt/sources.list file and change every jessie name to stretch. Below is an example of the contents of this file (it may vary according to your needs):

deb http://ftp.br.debian.org/debian/ stretch main
deb-src http://ftp.br.debian.org/debian/ stretch main

deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main

  • Then run:

# apt-get update
# apt-get dist-upgrade

If any problem comes up, read the error messages and try to solve it. Whether or not you manage to solve it, run the command again:

# apt-get dist-upgrade

If new problems appear, try to solve them. Search Google for solutions if necessary. But generally everything will go well and you should not have problems.

Changes in configuration files

While you are migrating, some messages about changes in configuration files may be shown. This can leave some users lost, not knowing what to do. Don't panic.

These messages can be presented in two ways: as plain text in a shell or as a blue message window. The text below is an example of a shell message:

Configuration file '/etc/rsyslog.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** rsyslog.conf (Y/I/N/O/D/Z) [default=N] ?

The screen below is an example of a message shown via a window:

In both cases, it is recommended to choose installing the new version of the configuration file. This is because the new configuration file will be fully adapted to the newly installed services and may have many new or different options. But don't worry, your settings will not be lost; there will be a backup of them. So, in the shell, choose option "Y", and in the window case, choose the option "install the package maintainer's version". It is very important to write down the name of each modified file. In the case of the window above, it is the /etc/samba/smb.conf file. In the shell case, the file was /etc/rsyslog.conf.

After completing the migration, you will be able to see both the new configuration file and the original one. If the new file was installed after a choice made in the shell, the original file (the one you had before) will keep the same name with the extension .dpkg-old. In the case of a choice made via the window, the file will be kept with the extension .ucf-old. In both cases, you can review the changes that were made and reconfigure your new file according to your needs.

If you need help seeing the differences between the files, you can use the diff command to compare them. Always diff from the new file to the original one. It is as if you wanted to see what to do to the new file to make it equal to the original. Example:

# diff -Naur /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

At first sight, the lines marked with "+" would have to be added to the new file to make it look like the old one, and the ones marked with "-" would have to be removed. But be careful: it is normal for some lines to differ, since the configuration file was written for a new version of the service or application it belongs to. So change only the lines that are really necessary and that you had changed in the previous file. See the example:

+daemon.*;mail.*;\
+    news.err;\
+    *.=debug;*.=info;\
+    *.=notice;*.=warn    |/dev/xconsole
+*.* @sam

In my case, originally, I had only changed the last line. So, in the new configuration file, I am only interested in adding that line. Well, if you were the one who made the previous configuration, you will know how to do the right thing. Generally, there will not be many differences between the files.

Another option to see the differences between files is the mcdiff command, which may be provided by the mc package. Example:

# mcdiff /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

Problems with graphical environments and applications

You may have problems with the operation of graphical environments, such as GNOME, KDE etc., or with applications such as Mozilla Firefox. In those cases, the problem is probably in the configuration files of these elements, located in the user's home directory. To check, create a new user in Debian and test with it. If everything works, make a backup of the previous configurations (or rename them) and let the application create a fresh configuration. For example, for Mozilla Firefox, go to the user's home directory and, with Firefox closed, rename the .mozilla directory to .mozilla.bak, start Firefox and test.

Feeling unsure?

If you feel very unsure, install Debian 8, with a graphical environment and other things, in a virtual machine and migrate it to Debian 9 to test and learn. I suggest VirtualBox as the virtualizer.

Have fun!

 

Categories: FLOSS Project Planets

Shawn McKinney: 2017 Dirty Kanza Checkpoint Three

Planet Apache - Sun, 2017-06-18 13:26

Note: This post is about my second Dirty Kanza 200 experience on June 3, 2017.

It’s broken into seven parts:

Part I – Prep / Training

Part II – Preamble

Part III – Starting Line

Part IV – Checkpoint One

Part V – Checkpoint Two

Part VI – Checkpoint Three

Part VII – Finish Line

Don’t Worry Be Happy

My thoughts as I roll out of Eureka @ 3:30pm…

  • Thirty minutes at a checkpoint is too long, double the plan, but was overheated and feel much better now.
  • I’m enjoying myself.
  • It’s only a hundred miles back to Emporia, I could do that in my sleep.
  • What’s that a storm cloud headed our way?  It’s gonna feel good when it gets here.
Mud & Camaraderie

That first century was a frantic pace and there’s not much time or energy for team building.  We help each other out, but it’s all business.

The second part is when stragglers clump into semi-cohesive units.   It’s only natural and in any case, foolish to ride alone.  A group of riders will always be safer than one, assuming everyone does their job properly.  Each new set of eyes brings another brain to identify and solve problems.

There’s Jim, who took a few years off from his securities job down in Atlanta, Georgia to help his wife with their Montessori school, and train for this race.  He and I teamed up during the first half of the third leg.  As the worst of the thunderstorms rolled over.

Before we crossed US highway 54, a rider was waiting to be picked up by her support team.  Another victim of muddy roads, a derailleur twisted, bringing an early end to a long day.  We stopped, checked and offered encouragement as a car whizzed by us.

“That’s a storm chaser!!”, someone called out, leaving me to wonder just how bad these storms were gonna get.

Derrick is an IT guy from St. Joseph, Missouri, riding a single-speed bike on his way to a fifth finish, and with it a Goblet commemorating 1000 miles of toil.

We rode for a bit at the end of the third, right at dusk.  My GPS, which up to now had worked flawlessly, had changed into the nighttime display mode and I could no longer make out which lines to follow, missed a turn and heard the buzzer telling me I’d veered off course.

I stopped and pulled out my cue sheets.  Those were tucked safely and sealed to stay nice and dry.  What, I forgot to seal them?  Its pages wet, stuck together and useless?

I was tired and let my mind drift.  Why didn’t I bring a headlamp on this leg?  I’d be able to read the nav screen better.  And where is everybody?  How long have I been on the wrong path?  Am I lost?

Be calm.  Get your focus and above all think.  What about the phone, maps are on it too.  It’s almost dead but plenty of reserve power available.

Just then Derrick’s dim headlight appeared in the distance.  He stopped and we quietly discussed my predicament.  For some reason his GPS device couldn’t figure that turn out either.  It was then we noticed tire tracks off to our right, turned and got back on track, both nav devices mysteriously resumed working once again.

Jeremy is the service manager at one of the better bike shops in Topeka, Kansas.  He’s making a third attempt.  Two years ago, he broke down in what turned into a mudfest.  Last year, he completed the course, but twenty minutes past due and didn’t make the 3:00 am cutoff.

His bike was a grinder of sorts with some fats.  It sounded like a Mack truck on the downhills, but geared like a mountain goat on the uphills.  I want one of them bikes.  Going to have to look him up at that bike shop one day.

Last year I remembered him lying at the roadside, probably ten maybe fifteen miles outside of Emporia.

“You alright?”, we stopped and asked.  It was an hour or more past midnight and the blackest of night.

“Yeah man, just tired, and need to rest a bit.  You guys go on, I’m fine”, he calmly told us.

There’s the guy from Iowa, who normally wouldn’t be at the back-of-the-pack (with us), but his derailleur snapped and he’d just converted to a single-speed as I caught up with him, and his buddy.  This was a first attempt for both.  They’d been making good until the rains hit.

Or the four chicks, from where I do not know, who were much faster than I, but somehow kept passing me.  How I would get past them again remains a mystery.

Also, all of the others, whose names can’t be placed, but the stories can…

Storms

 

Seven miles into that third leg came the rain.  It felt good, but introduced challenges.  The roads become slippery and a rider could easily go down.  They become muddy and the bike very much wants to break down.

Both are critical risk factors in terms of finishing.  One’s outcome much worse than the other.

Fortunately, both problems have good solutions.  The first, slow down the descents, pick through the rocks, pools of mud and water — carefully.  If in doubt stop and walk a section, although I never had to on this day, except for that one crossing with peanut butter on the other side.

By the way, these pictures that I’m posting are from the calmer sections.  It’s never a good idea to stop along a dangerous roadside just to take one.  That will create a hazard for the other riders, who then have to deal with you in their pathways which limits their choices for a good line.  When the going is tricky, keep it moving, if possible to do so safely.

The second problem means frequent stops to flush the grit from the drivetrains.  When it starts grinding, it’s time to stop and flush.  Mind the grind.  Once I pulled out two-centimeter chunks of rock lodged in the derailleurs and chain guards.

Use whatever is on hand.  River water, bottles, puddles.  There was mud — everywhere.  In the chain, gears and brakes.  It’d get lodged in the pedals and cleats of our shoes making it impossible to click in or (worse) to click out.  I’d use rocks to remove other rocks or whatever is handy and/or expedient.  It helps to be resourceful at times like this.  That’s not a fork, it’s an extended, multi-pronged, mud and grit extraction tool.

The good folks alongside the road were keeping us supplied with plenty of water.  It wasn’t needed for hydration, but for maintenance.  I’d ask before using it like this, to not offend them.  Pouring their bottles of water over my bike, but they understood and didn’t seem to mind.

We got rerouted once because the water crossing decided it wanted to be a lake.  This detour added a couple of miles to a ride that was already seven over two hundred.

The rain made for slow going, but I was having a good time and didn’t want the fun to end.

Enjoy this moment.  Look over there, all the flowers growing alongside the road.  The roads were still muddy but the fields were clean and fresh, the temperatures were cool.

wild flowers along the third leg

Madison (once again)

Rolled in about 930p under the cover of night.

930p @ Madison CP3

After all that fussing over nameplates in the previous leg, I found out mine was mounted incorrectly.  It partially blocked the headlight beam and had to be fixed.

Cheri lends a hand remounting the nameplate so I can be a happy rider

It was Cheri’s second year doing support.  Last year it was her and Kelly crewing for Gregg and I.  This year, she and Gregg came as well.  As I said earlier, the best part of this race is experiencing it with friends and family.

I was in good spirits, but hungry, my neck ached, and my bike was in some serious need of attention.  All of this was handled with calm efficiency by Kelly & Co.

Kyle, who’s an RN, provided medical support with pain relievers and ice packs.  They knew I liked pizza late in the race and Gregg handed some over that had just been pulled fresh from the oven, across the street, at the EZ-mart. It may not sound like much now, but gave me the needed energy boost, from something that doesn’t get squeezed out of a tube.

As soon as Cheri finished the nameplate, Gregg got the drivetrain running smoothly once again.

All the while, Kelly and Mom were assisting and directing.  There’s the headlamp needing to be mounted, fresh battery packs, change to the clear lens on the glasses, socks, gloves, cokes, energy drinks, refilling water tanks, electrolytes, gels and more.  There’s forty-some miles to go, total darkness, unmarked roads.  Possibly more mud on the remaining B roads.  Weather forecast clear and mild.

Let’s Finish This

“Who are you riding with?”, Gregg called out as I was leaving.  He ran alongside for a bit, urging me on.

Gregg runs alongside as I leave CP3

“Derrick and I are gonna team up”, I called back, which was true, that was the plan as we rolled into town.  Now I just had to find him.  Madison was practically deserted at this hour, its checkpoint regions, i.e. red, green, blue, orange, were spread out, and what color did he say he was again??

 

Twenty-two minutes spent refueling at checkpoint three and into the darkness again.  That last leg started @ 10 pm with 45 miles to go.  I could do that in my sleep, may need to.

Next Post: Part VII – Finish Line


Categories: FLOSS Project Planets