FLOSS Project Planets

Removing packages and configurations with apt-get

LinuxPlanet - Mon, 2014-08-18 10:45

Yesterday while re-purposing a server I was removing packages with apt-get and stumbled upon an interesting problem. After I removed the package and all of its configuration files, the subsequent installation did not re-deploy the configuration files.

After a bit of digging I found out that there are two methods for removing packages with apt-get. One of those methods should be used if you want to remove binaries only, and the other should be used if you want to remove both binaries and configuration files.

What I did

Since the method I originally used caused at least 10 minutes of head scratching, I thought it would be useful to share what I did and how to resolve it.

On my system the package I wanted to remove was supervisor, which is pretty awesome btw. To remove the package I simply removed it with apt-get remove, just like I've done many times before.

# apt-get remove supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  supervisor
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 1,521 kB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 14158 files and directories currently installed.)
Removing supervisor ...
Stopping supervisor: supervisord.
Processing triggers for ureadahead ...

No issues so far; according to apt the package was removed without any problems. However, after looking around a bit I noticed that the /etc/supervisor directory still existed, as did the supervisord.conf file.

# ls -la /etc/supervisor
total 12
drwxr-xr-x  2 root root 4096 Aug 17 19:44 .
drwxr-xr-x 68 root root 4096 Aug 17 19:43 ..
-rw-r--r--  1 root root 1178 Jul 30  2013 supervisord.conf

Considering I was planning on re-installing supervisor, and I didn't want any weird configuration issues carried over as the machine moved from one server role to another, I did what any other reasonable sysadmin would do: I removed the directory...

# rm -Rf /etc/supervisor

I knew the supervisor package was removed, and I assumed that the package hadn't removed the config files in order to avoid losing custom configurations. In my case I wanted to start over from scratch, so deleting the directory seemed like a reasonable thing to do.

# apt-get install supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  supervisor
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/314 kB of archives.
After this operation, 1,521 kB of additional disk space will be used.
Selecting previously unselected package supervisor.
(Reading database ... 13838 files and directories currently installed.)
Unpacking supervisor (from .../supervisor_3.0b2-1_all.deb) ...
Processing triggers for ureadahead ...
Setting up supervisor (3.0b2-1) ...
Starting supervisor: Error: could not find config file /etc/supervisor/supervisord.conf
For help, use /usr/bin/supervisord -h
invoke-rc.d: initscript supervisor, action "start" failed.
dpkg: error processing supervisor (--configure):
 subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
 supervisor
E: Sub-process /usr/bin/dpkg returned an error code (1)

However, it seems supervisor could not start after re-installing.

# ls -la /etc/supervisor
ls: cannot access /etc/supervisor: No such file or directory

There is a good reason why supervisor wouldn't start: the /etc/supervisor/supervisord.conf file was missing. Shouldn't the package installation deploy the supervisord.conf file? Well, technically no. Not with the way I removed the supervisor package.

Why it didn't work

How remove works

If we look at apt-get's man page a little closer we can see why the configuration files are still there.

remove
    remove is identical to install except that packages are removed instead of installed. Note that removing a package leaves its configuration files on the system.

As the manpage clearly says, remove removes the package but leaves its configuration files in place. This explains why the /etc/supervisor directory was lingering after removing the package; but it doesn't explain why a subsequent installation doesn't re-deploy the configuration files.

Package States

If we use dpkg to look at the supervisor package, we will start to see the issue.

# dpkg --list supervisor
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version        Architecture   Description
+++-==============-==============-==============-========================================
rc  supervisor     3.0b2-1        all            A system for controlling process state

With the dpkg package manager a package can have more states than just being installed or not-installed. In fact there are several package states with dpkg.

  • not-installed - The package is not installed on this system
  • config-files - Only the configuration files are deployed to this system
  • half-installed - The installation of the package has been started, but not completed
  • unpacked - The package is unpacked, but not configured
  • half-configured - The package is unpacked and configuration has started but not completed
  • triggers-awaited - The package awaits trigger processing by another package
  • triggers-pending - The package has been triggered
  • installed - The package is unpacked and configured OK

If you look at the first column of the dpkg --list output, it shows rc. The r in this column means the package has been removed, which as we saw above means the configuration files are left on the system. The c in this column shows that the package is in the config-files state; that is, only the configuration files are deployed on this system.

When running apt-get install, the apt package manager looks up the current state of the package; when it sees that the package is already in the config-files state, it simply skips the configuration file portion of the package installation. Since I manually removed the configuration files outside of the apt or dpkg process, the configuration files are gone and will not be deployed with a simple apt-get install.
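As an aside, you can query that state directly rather than decoding the dpkg --list columns, and dpkg has a commonly suggested force option for re-creating missing conffiles without a full purge. A minimal sketch (supervisor is just the example package here; the purge described below is still the cleaner fix):

# Show the raw package state; a package left in the config-files state
# reports "deinstall ok config-files".
dpkg-query -W -f='${Status}\n' supervisor

# Reinstall and ask dpkg to re-create any configuration files that have
# gone missing, instead of skipping them.
apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" supervisor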

How to resolve it and remove configurations properly

Purging the package from my system

At this point, I found myself with a broken installation of supervisor. Luckily, we can fix the issue by using the purge option of apt-get.

# apt-get purge supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  supervisor*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 1,521 kB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 14158 files and directories currently installed.)
Removing supervisor ...
Stopping supervisor: supervisord.
Purging configuration files for supervisor ...
dpkg: warning: while removing supervisor, directory '/var/log/supervisor' not empty so not removed
Processing triggers for ureadahead ...

Purge vs Remove

The purge option of apt-get is similar to remove, with one difference: purge removes both the package and its configuration files. After running apt-get purge we can see that the package was fully removed by running dpkg --list again.

# dpkg --list supervisor
dpkg-query: no packages found matching supervisor

Re-installation without error

Now that the package has been fully purged and its state is not-installed, we can re-install without errors.

# apt-get install supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  supervisor
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/314 kB of archives.
After this operation, 1,521 kB of additional disk space will be used.
Selecting previously unselected package supervisor.
(Reading database ... 13833 files and directories currently installed.)
Unpacking supervisor (from .../supervisor_3.0b2-1_all.deb) ...
Processing triggers for ureadahead ...
Setting up supervisor (3.0b2-1) ...
Starting supervisor: supervisord.
Processing triggers for ureadahead ...

As you can see from the output above, the supervisor package has been installed and started. If we check the /etc/supervisor directory again we can also see the necessary configuration files.

# ls -la /etc/supervisor/
total 16
drwxr-xr-x  3 root root 4096 Aug 17 19:46 .
drwxr-xr-x 68 root root 4096 Aug 17 19:46 ..
drwxr-xr-x  2 root root 4096 Jul 30  2013 conf.d
-rw-r--r--  1 root root 1178 Jul 30  2013 supervisord.conf

You should probably just use purge in most cases

After running into this issue I realized that most of the times I ran apt-get remove, I really wanted the functionality of apt-get purge. While it is nice to keep configurations handy in case we need them after re-installation, using remove all the time also leaves random config files cluttering your system, free to cause configuration issues when packages are removed and then re-installed.

In the future I will most likely default to apt-get purge.


Originally Posted on BenCane.com: Go To Article
Categories: FLOSS Project Planets

Katie Cunningham: What is a tech reader?

Planet Python - Mon, 2014-08-18 10:44

If you write a tech book, eventually, you’ll be asked to find tech readers. It may not even be for your book! I’ve been asked to find tech readers for other people’s books several times, especially if the book is geared towards beginners. I teach, so naturally, I know more than a few new coders.

But… what is a tech reader?

The job

Simply put, the tech reader is the person who reads the book with an eye to technical accuracy. Grammar, layout, spelling: None of these are your bag. You make sure that the explanations make sense. You run the code to make sure it works. You point out if the author has glossed over something major, or if they’re using something that hasn’t been used yet.

The level of experience you need varies. A book should always have some tech readers who are the intended audience (so, possibly, beginners), but you also need experts to read the book. A beginner will notice more easily than an expert when the text becomes confusing, but an expert is more likely to point out when you're incorrect, or leading someone down the wrong path.

For example, Doug Hellmann was one of my expert technical readers for Teach Yourself Python in 24 Hours. Back in the day, I had planned a chapter on pickles (because I had to have 24 chapters and was having trouble with what should go in the middle of the book). He was the one that suggested that I should probably just teach JSON instead.

My beginners chimed in when they felt I was going too fast, and were experts at noticing when I was using something I hadn’t explained yet (like showing a for loop while teaching lists, even though I wasn’t covering for loops until the next chapter). Stuff like that is difficult for an expert to pick up on because, well, for loops and such come naturally to us.

The pay

I’m going to be frank about this: The pay is not great. Some places will toss a bit of money at you (around a few hundred dollars) while others will offer you a copy of the book. It not only varies by publishing house but by individual book. Some books simply end up with more of a budget for readers. A book that needs both beginners and experts will have more money allotted to technical readers than one aimed just at experts.

So why do it?

If the pay sucks but it’s going to take you a while to do, why would you bother to be a tech reader?

The biggest one: If you want to be a writer, this is a great way to get your name on the list. You get to chat with the editors and other authors as well as show off your chops. There are other ways to get your foot in the door, but when it comes to breaking into the writing scene, I recommend knocking on all the doors you can find.

Sometimes, you just want to do a good turn for someone in the community. I have tech-read books because I consider someone a friend, and I want their book to be as awesome as possible.

It’s also a wonderful learning opportunity, if you happen to be a novice. Not only do you have a book, but you have access to the author. Not clear on a point? Shoot off an email! During the tech review, my job was to basically sit around and wait for my readers to email me with questions.

Finally, it helps create a better book. We always need more tech books. I know, it seems like there’s already a ton of tech books out there. Unlike a novel, though, tech books have a very short lifespan. Within a few years, they’re out of date, and a few years after that, they’re often useless. We need a stream of new books and updated books to help spread ideas and bring new developers into the fold.

How do I become a tech reader?

If you’re already friends with an author, then I would suggest telling them that you would like to be a tech reader. Most of us keep a list on hand for when our editor inevitably asks us to gather some people (I know I do).

If you see a booth for a publisher at a conference, talk to the people staffing it. I assure you, these people are not interns. They’re usually editors, authors, and community managers, and if they don’t know of a project you can help on right now, they can pass your information on to someone who does.

Finally, if you don’t go to conferences and don’t know any active authors, try Twitter. Every major publishing company has a dozen contact emails you can try, but I’ve found the people manning the Twitter accounts to be the most responsive. Most will follow you back so you can have a private conversation.

Just… don’t do what I did and complain about the quality of tech books. It worked for me, but you should probably start off with politeness rather than being a grouchy cuss.

Categories: FLOSS Project Planets

Appnovation Technologies: Different Point of Views

Planet Drupal - Mon, 2014-08-18 10:25

The Drupal Views module is an amazing tool. It certainly has contributed significantly to the widespread adoption of Drupal.

Categories: FLOSS Project Planets

Gábor Hojtsy: Moving Drupal forward at Europe's biggest warm water lake

Planet Drupal - Mon, 2014-08-18 10:08

Drupalaton 2014 was amazing. I got involved pretty late in the organization when we added sprinting capacity on all four days, but I must say doing that was well worth it. While the pre-planned schedule of the event focused on longer full day and half day workshops on business English, automation, rules, commerce, multilingual, etc., the sprint was thriving with backend developer luminaries such as Wim Leers, dawehner, fago, swentel, pfrennsen, dasjo, as well as a sizable frontend crew including mortendk, lewisnyman, rteijeiro, emmamaria, etc. This setup allowed us to work on a very wide range of issues.

The list of 70+ issues we worked on shows our work on the drupal.org infrastructure, numerous frontend issues to clean up Drupal's markup, important performance problems, several release critical issues and significant work on all three non-postponed beta blockers at the time.


Drupalers "shipped" from port to port; Photo by TCPhoto

Our coordinated timing with the TCDrupal sprints really helped in working on some of the same issues together. We successfully closed one of the beta blockers shortly after the sprint thanks to coordinated efforts between the two events.

Our list of issues also shows the success of the Rules training on the first day in bringing new people in to porting Rules components, as well as work on other important contributed modules: fixing issues with the Git deploy module's Drupal 8 port and work on the Drupal 8 version of CAPTCHA.

Thanks to the organizers and the sponsors of the event, including the Drupal Association Community Cultivation Grants program, for enabling us to have some of the most important Drupal developers work together on pressing issues, eat healthy and have fun on the way.

Ps. There is never a lack of opportunity to work with these amazing people. Several days of sprints are coming up around DrupalCon Amsterdam in a little over a month! The weekend sprint locations before/after the DrupalCon days are also really cool! See you there!

Categories: FLOSS Project Planets

Acquia: Drupal Stories Kick Off: My Own Drupal Story

Planet Drupal - Mon, 2014-08-18 09:55

It’s no secret that Drupalists are in high demand. I’ve blogged previously about the need for training more Drupalers and reaching them earlier in their careers, but that’s just one aspect of the greater topic, which merits a closer inspection as a cohesive whole.

Categories: FLOSS Project Planets

godel.com.au: Use Behat to track down PHP notices before they take over your Drupal site forever

Planet Drupal - Mon, 2014-08-18 09:15

Behat is one of the more popular testing frameworks in the Drupal community at the moment, for various reasons. One of these reasons is the useful Behat Drupal Extension that provides a DrupalContext class that can be extended to get a lot of Drupal specific functionality in your FeatureContext right off the bat.

In this post, I'm going to show you how to make Behat aware of any PHP errors that are logged to the watchdog table during each scenario that it runs. In Behat's default setup, a notice or warning level PHP error will not usually break site functionality and so won't fail any tests. Generally though, we want to squash every bug we know about during our QA phase so it would be great to fail any tests that incidentally throw errors along the way.

The main benefits of this technique are:

  • No need to write extra step definitions or modify existing steps, but you'll get some small degree of coverage for all functionality that just happens to be on the same page as whatever you are writing tests for
  • Very simple to implement once you have a working Behat setup with the DrupalContext class and Drupal API driver
  • PHP errors are usually very easy to clean up if you notice them immediately after introducing them, but not necessarily 6 months later. This is probably the easiest way I've found to nip them in the bud, especially when upgrading contrib modules between minor versions (where it's quite common to find new PHP notices being introduced).
The setup

Once you've configured the Drupal extension for Behat, and set the api_driver to drupal in your behat.yml file, you can use Drupal API functions directly inside your FeatureContext.php file (inside your step definitions).
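For reference, a minimal behat.yml wiring this up might look like the sketch below; the base_url and drupal_root values are placeholders of mine, and the exact extension keys vary between Behat/extension versions, so treat this as an illustration rather than a drop-in file.

# Sketch of a behat.yml for the Behat 2.x-era Drupal Extension (assumed paths).
default:
  extensions:
    Behat\MinkExtension\Extension:
      goutte: ~
      base_url: 'http://mysite.localhost'
    Drupal\DrupalExtension\Extension:
      api_driver: 'drupal'
      drupal:
        drupal_root: '/var/www/mysite'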

Conceptually, what we're trying to achieve is pretty straightforward. We want to flush the watchdog table before we run any tests and then fail any scenario that has resulted in one or more PHP messages logged by the end of it. It's also important that we give ourselves enough debugging information to track down errors that we detect. Luckily, watchdog already stores serialized PHP error debug information by default, so we can unserialize what we need and print it straight to the console as required.

You will need to write a custom FeatureContext class extending DrupalContext with hooks for @BeforeSuite and @AfterScenario.

Your @BeforeSuite should look something like this:

<?php
/**
 * @BeforeSuite
 */
public static function prepare(SuiteEvent $event) {
  // Clear out anything that might be in the watchdog table from god knows
  // where.
  db_truncate('watchdog')->execute();
}

And your corresponding @AfterScenario would look like this:

<?php
/**
 * Run after every scenario.
 */
public function afterScenario($event) {
  $log = db_select('watchdog', 'w')
    ->fields('w')
    ->condition('w.type', 'php', '=')
    ->execute()
    ->fetchAll();
  if (!empty($log)) {
    foreach ($log as $error) {
      // Make the substitutions easier to read in the log.
      $error->variables = unserialize($error->variables);
      print_r($error);
    }
    throw new \Exception('PHP errors logged to watchdog in this scenario.');
  }
}

My apologies, I know this code is a little rough. I'm just using print_r() to spit out the data I'm interested in without even bothering to process the Drupal variable substitutions through format_string(), but hey, it's still legible enough for the average PHP developer and it totally works! Maybe someone else will see this, be inspired, and share a nicer version back here...
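In that spirit, one possible refinement is to run each message through format_string() before printing, so the output reads like the entries on the dblog page. This is an untested sketch of mine, not code from the original post:

<?php
// Untested sketch: apply the Drupal variable substitutions before printing.
foreach ($log as $error) {
  $variables = unserialize($error->variables);
  print format_string($error->message, is_array($variables) ? $variables : array()) . "\n";
}
throw new \Exception('PHP errors logged to watchdog in this scenario.');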

Categories: FLOSS Project Planets

How else to help out

Planet KDE - Mon, 2014-08-18 08:09
Yesterday I blogged about how to help with testing. Today, let me share how you can facilitate development in other ways. First of all - you can enable testers!

Help testers
As I mentioned, openSUSE moved to a rolling release of Factory to facilitate testing. KDE software has development snapshots for a few distributions. ownCloud is actually looking for some help with packaging - if you're interested, ping dragotin or danimo on the owncloud-client-dev IRC channel on freenode (web interface for IRC here). Thanks to everybody helping developers with this!

KDE developers hacking in the mountains of Switzerland
Coding
Of course, there is code. Almost all projects I know have developer documentation. ownCloud has the developer manual and the KDE community is writing nothing less than a book about writing software for KDE!

Of course - if you want to get into coding ownCloud, you can join us at the ownCloud Contributor Conference in Berlin in two weeks, and KDE has Akademy coming just two weeks later!

And more
Not everybody has the skills to integrate zsync in ownCloud to make it only upload changes to files, or to juggle complicated APIs in search of better performance in Plasma, but there is plenty more you can do. Here is a KDE call for promo help as well as KDE's generic get involved page. ownCloud also features a list of what you can do to help and so does openSUSE.
Or donate...
If you don't have the time to help, there is still something: donate to support development. KDE has a page asking for donations and spends the donations mostly on organizing developer events. For example, right now, planet KDE is full of posts about Randa. Your donation makes a difference!

You can support ownCloud feature development on bountysource, where you can even put money on a specific feature you want. This provides no guarantees - a feature can easily cost tens to hundreds of hours to implement, so multiple people will have to support a feature. But your support can help a developer spend time on this feature instead of working for a client and still be able to put food on the table at home.

So, there are plenty of ways in which you can help to get the features and improvements you want. Open Source software might be available for free, but its development still costs resources - and without your help, it won't happen.
Categories: FLOSS Project Planets

Junichi Uekawa: sigaction bit me.

Planet Debian - Mon, 2014-08-18 07:34
sigaction bit me. There's a system call and a libc function of similar name (sigaction vs rt_sigaction) but they behave differently.
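For anyone who hasn't hit this: on Linux, glibc's sigaction() wrapper is implemented on top of the rt_sigaction system call, and invoking the raw syscall yourself means dealing with the kernel's own struct layout plus an explicit sigsetsize argument. A minimal sketch of the safe route through the libc wrapper (my example, not from the post):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig) {
    (void)sig;
    /* Only async-signal-safe calls belong in a handler. */
    const char msg[] = "got SIGUSR1\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    if (sigaction(SIGUSR1, &sa, NULL) != 0)
        return 1;
    raise(SIGUSR1);  /* deliver the signal to ourselves */
    return 0;
}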

Categories: FLOSS Project Planets

Openmeetings Team: Commercial Openmeetings Support FAQ

Planet Apache - Mon, 2014-08-18 07:06

This is our attempt to keep support knowledge structured and to save you the time of asking questions.

Contents

Pricing

Openmeetings is free software under the Apache License, and, what is even better, you can get everything for free. You can install Openmeetings yourself for free, or get free support at openmeetings-user@incubator.apache.org.

Still, you may want to save your time by asking us (support-om@dataved.ru) for assistance. The price is calculated by multiplying the required hours by the hourly rate (€50 / hour).

Service                                         Hours
Initial server & network check                  1
Stress server & network check                   2
Installation or upgrade of a supported system   10
Configuration of the integrated room hosting    2
Integrated room hosting (per month)             1
Moodle installation                             3
Moodle plug-in installation                     2
Upgrade of security certificates                2
Site migration                                  12
Simple re-branding                              2
Admin access to the demo server                 1
Customizations                                  ?

Supported systems include Openmeetings, a number of CMS and integration plug-ins.

Installation requires you to answer a number of questions (see Openmeetings installation questions, CMS plug-in installation questions), hence we can set up the system as you like.

Why does admin access to the demo server cost something?

We need to verify the users who get access to the sensitive data. Payment is the simplest verification. Please note, you get a limited time frame to use your admin access.

Do you offer hosting packages?

Yes, we do.

Customization

Is it possible to customize Openmeetings' look & feel?

Yes, that's possible. That's the most visible advantage of Openmeetings' open source nature.

Is it possible to change several things at once?

Unless you are a trusted long-term customer, we start with small agile projects containing minor customizations. That saves your money, because we address issues in the order that is most critical for your business.

Why don't you provide estimates for the whole project?

Again, unless you are a trusted long-term customer, we cannot just produce estimates from your text descriptions without understanding your business and expectations. The same installation on the Russian market costs 5 times more, because there we work for large enterprises who require extra security, reliability and training. Small consulting companies get affordable prices because they usually don't want us to configure their internal VPNs and routers (which they don't actually have) as a part of the installation services.

What happens with code which is developed during commercial support?

The general changes which are useful for the project (e.g. bug fixes or general new features) are developed under Apache license and committed into the open source trunk. This helps customers update to a newer version smoothly.

There may be some exceptions. For example, for specific customizations we maintain a private source control system for your project, and this costs extra.

Integration

Which CMS integration modules do you provide?

We provide Drupal, Wordpress, Joomla, Alfresco, Typo3 CMS modules (exact supported version numbers can be clarified in each particular case). We also provide integration to SugarCRM, Zimbra and some other popular systems.

Could you please send us a module for CMS (content management system) integration?

We don't provide these modules without installation services. It requires collecting integration requirements first to succeed in the integration process, and we cannot afford a failure.

Could you please send me a demo link for an Openmeetings integration example?

Here is a list of available demos.

We do not provide our clients with the admin account on our demo servers. If you want to try how it works with administrative credentials, please write us and we will send you login information. Please check the Pricing section for more details.

We can install demos for you.

Is it possible to integrate Openmeetings with Microsoft .Net (JBoss, etc.)?

Openmeetings integrates with other applications by means of the language-independent SOAP protocol. We can integrate Openmeetings with any Internet application.
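To make that concrete, a SOAP call from PHP can be as small as the sketch below. The service path and the getSession/loginUser method names are assumptions based on the Openmeetings API docs of that era, not something stated in this FAQ; check the project's SOAP documentation for the real names.

<?php
// Hypothetical sketch using PHP's built-in SoapClient against Openmeetings.
$client = new SoapClient('http://om.example.com/openmeetings/services/UserService?wsdl');
// Obtain a session ID, then authenticate it so later calls are authorized.
$session = $client->getSession();
$sid = $session->return->session_id;
$client->loginUser(array('SID' => $sid, 'username' => 'admin', 'userpass' => 'secret'));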

Does Openmeetings work on mobile devices?

This feature is considered experimental. We can integrate your favorite SIP phone like Linphone with Openmeetings via a gateway, or re-compile the existing Flash client for your device. Flash support varies between devices, hence for the latter option we cannot always guarantee project success.

Technology

Which tools & technology are behind Openmeetings?

The server side is written in Java, the client side uses OpenLaszlo, Flash and Java.

What can I do about echo?

Use headphones and manually mute microphones, or try speakerphones.

I have got a browser crash or my client hangs. What can I do?

Please find more info about resolving browser crashes here.

Process

Why does it take so many hours?

Don't hesitate to ask if the task estimate is more than you expect. We strive to make our process transparent.

The commercial development cycle contains the following stages: understand what should be done > create a tracker issue and transfer the task to a programmer > fix > compile > test > commit to source control > verify the change with a second pair of eyes (ask another person to compile and deploy the new source to the test server) > deploy to the production server. Here the most important part comes: you get a week to verify the changes yourself.

Is it possible to talk with somebody from the commercial support team personally?

If you need a demo account or would like to talk to us personally we can set up a meeting in the room on our demo server. Just discuss the details and what time fits you by e-mail.

We insist on using OpenMeetings for such meetings. This gives you some experience of using OpenMeetings and helps you understand whether OpenMeetings meets your expectations or not. That is why phone and skype calls are undesirable for us.

Installation Questionnaire

Please answer the following questions to ensure a proper installation.

  • Have you tried a demo?
  • How many conference rooms do you need? Do you need rooms for webinars, or for face-to-face talks, or both? Which resolution would you like to have? How would you name the rooms?
  • What kind of server do you have? You need a dedicated server with a minimum of 2-4 GB RAM and 2-3 dual-core or quad-core CPUs. Recommended operating systems are Ubuntu or Debian. Please send us administrator credentials for your server for remote access.
  • Do users and the server have enough bandwidth, and is the network quality sufficient? You generally need 1 Mbit/sec for 4 users at 200x200px resolution.
  • Which hardware do users plan to use?
  • Are there any users who will use the system on a regular basis and benefit from the warming-up training?
  • Are the server or users behind a firewall? Does your server have ports 80, 5080, 1935 and 8088 open? Does the firewall limit RTMP or HTTP traffic?
  • How many users do you plan to have? For high-load solutions, do you have a production-grade database installed? Please send us administrator credentials, or credentials of a user who can create and manage the openmeetings database.
  • Do you want to have email notifications? If yes, please send us the smtp server host and port, the openmeetings server email address for outgoing correspondence, and credentials of an smtp server user which can send emails from this address.
  • Which timezone, language and country are default for most of your users?
  • Do you want to close open registration on your Openmeetings site?
  • May we add the link to your site as an example of successful integration to the end of this page?
  • What other requirements do you have in mind?
Installation Check

A working Openmeetings installation is an important prerequisite for rebranding and integration services.

  • Check that you can hear and see other participants.
  • Ensure recordings work for your installation.
  • Ensure you can successfully put Word documents on the whiteboard.
Rebranding Checklist

This checklist ensures you've taken the steps required for the product rebranding.

  • Check that Openmeetings is installed correctly.
  • Provide Openmeetings server and remote server access credentials.
  • Provide a logo (40 pixel height).
  • Provide your company name, the string for the conference URL (the so-called context), and the browser window title.
  • Specify company style colors (light background, window border color).
Integration Questionnaire

Please answer the following questions to ensure proper integration.

  • Do you have Openmeetings installed correctly?
  • Which system do you want to integrate with? Which version?
  • As for the systems being integrated, which servers are they located on? Please provide us with administrator credentials for both systems for remote access.
  • Where on the website shall the links to the conference rooms be displayed?
  • Which rooms would be visible on site?
  • What happens with the recordings users make in the conference room? Shall a user be able to place a link to a recording he made?
  • May we add the link to your site as an example of successful integration to the end of this page?
Business Edition

We do offer a Business Edition of Openmeetings. It is compiled from the same sources; the difference is in the configuration service. The service includes SIP integration with selected VoIP providers. If you opt for SIP integration, your users can start using mobile devices with Openmeetings via a SIP gateway.

There is no fixed price for this edition. The required effort is billed at the standard hourly rate and depends on the complexity of the client's network infrastructure and the number of SIP providers.

Guarantees and commitments

Please take into account that you don’t buy a software product, as OpenMeetings itself is free. You hire our developers for some time. In particular, this means that we don’t ensure fixing bugs in OpenMeetings if you don’t pay for them according to our usual rate.

Good things here are that:

  • Usually we install the release version for clients; it’s always well tested and stable enough.
  • Sometimes we fix critical problems and make security updates for free, but all such cases are considered individually; this is not a common rule.

To be sure that OpenMeetings is what you really need, you should try our demo server before we start a project. Our installations provide exactly the same functionality as the demo servers do. So if you cannot get the desired quality on the demo, most probably you would not get it on your own installation either. This is especially true regarding the quality of sound, video, recordings and screen sharing.

We are not responsible for client-side problems. If your users don’t have enough bandwidth or RAM on their workstations, we cannot resolve such problems. Again, try the demo server with your equipment first to make a decision.

Unless separately discussed, you have one week to verify whether the installed system meets your expectations, and during this period we will help resolve the issues you face. In the case of the hosting service, this week is included in the first hosting period.

We cannot offer a refund for hours which have been already spent on your project or payments for the hosting services.

Categories: FLOSS Project Planets

Ian Ozsvald: Python Training courses: Data Science and High Performance Python coming in October

Planet Python - Mon, 2014-08-18 07:05

I’m pleased to say that via our ModelInsight we’ll be running two Python-focused training courses in October. The goal is to give you new, strong research & development skills; they’re aimed at folks in companies but would suit folks in academia too. UPDATE: the training courses are ready to buy (1 Day Data Science, 2 Day High Performance).

UPDATE: we have a <5min anonymous survey which helps us learn your needs for Data Science training in London; please click through and answer the few questions so we know what training you need.

“Highly recommended – I attended in Aalborg in May … upcoming Python DataSci/HighPerf training courses” @ThomasArildsen

These and future courses will be announced on our London Python Data Science Training mailing list; sign up for occasional announcements about our upcoming courses (no spam, just occasional updates, you can unsubscribe at any time).

Intro to Data science with Python (1 day) on Friday 24th October

Students: Basic to Intermediate Pythonistas (you can already write scripts and you have some basic matrix experience)

Goal: Solve a complete data science problem (building a working and deployable recommendation engine) by working through the entire process – using numpy and pandas, applying test driven development, visualising the problem, deploying a tiny web application that serves the results (great for when you’re back with your team!)

  • learn basic numpy, pandas and data cleaning
  • be confident with Test Driven Development and debugging strategies
  • create a recommender system and understand its strengths and limitations
  • use a Flask API to serve results
  • learn Anaconda and conda environments
  • take home a working recommender system that you can confidently customise to your data
  • £300 including lunch, central London, two trainers (24th October)
  • additional announcements will come via our London Python Data Science Training mailing list
  • Buy your ticket here
High Performance Python (2 day) on Thursday+Friday 30th+31st October

Students: Intermediate Pythonistas (you need higher performance for your Python code)

Goal: learn high performance techniques for performant computing, a mix of background theory and lots of hands-on pragmatic exercises

  • Profiling (CPU, RAM) to understand bottlenecks
  • Compilers and JITs (Cython, Numba, Pythran, PyPy) to pragmatically run code faster
  • Learn r&d and engineering approaches to efficient development
  • Multicore and clusters (multiprocessing, IPython parallel) for scaling
  • Debugging strategies, numpy techniques, lowering memory usage, storage engines
  • Learn Anaconda and conda environments
  • Take home years of hard-won experience so you can develop performant Python code
  • Cost: £600 including lunch, central London, two trainers (30th & 31st October)
  • additional announcements will come via our London Python Data Science Training mailing list
  • Buy your ticket here

The High Performance course is built off of many years teaching and talking at conferences (including PyDataLondon 2013, PyCon 2013, EuroSciPy 2012) and in companies along with my High Performance Python book (O’Reilly). The data science course is built off of techniques we’ve used over the last few years to help clients solve data science problems. Both courses are very pragmatic, hands-on and will leave you with new skills that have been battle-tested by us (we use these approaches to quickly deliver correct and valuable data science solutions for our clients via ModelInsight). At PyCon 2012 my students rated me 4.64/5.0 for overall happiness with my High Performance teaching.

“@ianozsvald [..] Best tutorial of the 4 I attended was yours. Thanks for your time and preparation!” @cgoering

We’d also like to know which other courses you’d like to learn, we can partner with trainers as needed to deliver new courses in London. We’re focused around Python, data science, high performance and pragmatic engineering. Drop me an email (via ModelInsight) and let me know if we can help.

Do please join our London Python Data Science Training mailing list to be kept informed about upcoming training courses.

Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.
Categories: FLOSS Project Planets

Understanding Icons: Participate in our 3rd survey

Planet KDE - Mon, 2014-08-18 04:46

Our little journey through different icon sets continues. Please participate in our little game and help us to learn more about the usability of icon design.

Keep on reading: Understanding Icons: Participate in our 3rd survey

Categories: FLOSS Project Planets

what is "the desktop": KDE and laptops

Planet KDE - Mon, 2014-08-18 03:27

The first entry in this miniseries described the goal of the argument I was about to make: a definition shift in what KDE perceives as "the desktop" and thereby the computers defined as its common target.

The common response by many KDE people is to respond with what KDE is now, as if that is somehow immutable. On one mailing list I recently read how "nearly all apps are written for laptops/desktops" (paraphrased to protect the innocent) when another person (not me) suggested we ought to look to a broader array of device types as targets. That's how it is, so that's how it shall always be?

No, and to understand why let's remember our own history.
In the beginning...
When KDE got started, development happened on desktop computers, the kind you stowed away under your desk and which were physically connected to a wall at all times for power. Cables snaked from the big computer box to the monitor, keyboard, mouse and the network. Battery backup was an add-on few had, and Wifi was not part of the common landscape.
At early KDE events where laptops were needed for participation, people who did not have laptops were encouraged to say so and the organizers would arrange for a system to be available for them.
In fact, at early KDE events it was typical to have a hacking area with desktop computers sitting ready on desks for people to jump on and use!
Rise of the laptop
Laptops did start appearing and Wifi became the norm. (Though often we still used wired networks for distributed compilation.) At first laptop support pretty much sucked, let's be honest; but changes did start to appear.
KWin got a "laptop decoration" that took less vertical space. I used it, in fact, for that very reason on my laptop with its amazing (iirc) 800x600 resolution. A touchpad configuration panel appeared.
And so it was that over time we got all the things needed to make a laptop experience great in a KDE desktop: dynamic power management, a proper hotplug UI, display brightness OSD, bluetooth configuration, a network management UI so critical for non-wired devices constantly on the move, ...
Very little of that is of any interest to the desktop systems KDE originally targeted. KDE adjusted its definition of the desktop to include laptops. Today we all take that for granted, and of course when we say "desktop" we also implicitly mean "laptop" too.
Did KDE abandon the "real" desktop?
In all this, did KDE leave behind those boat-anchors stuffed under the desk? Not at all.
Let's take Plasma as an example. When creating the initial desktop layout, it actually checks to see if the machine is battery powered (such as a laptop) or not; when it isn't (because it's a traditional desktop system) the battery widget is not added to the panel. Additionally, graphics features used by components such as KWin are tested (often by participating users) on desktop GPUs and where there are problems fixes are made.
The many features Plasma sports that are largely useful for laptops simply don't interfere with usage on a true desktop computer. Expanding the definition of "desktop" to "laptop" did not result in a poor experience.
In fact, with laptops as a focus more people rather than fewer use multiple screen setups, so multiscreen matters to more people now than it did when I was the sole embattled maintainer of multiscreen for Kicker way back in KDE Desktop 3.x times. Expanding the definition to "laptop" actually led to improvement in this area for true desktop systems with multiple screens.
KDE is strong enough to grow
By looking at KDE's own history, we learn that the definition of "the desktop" is not static. It has changed, it can change, and KDE is fully capable of riding those waves quite successfully. As we consider what "the desktop" is today, then, let's all put the "KDE is..." thinking behind us and instead focus on "KDE should be..." with confidence. Whatever the desktop has become or might become, KDE is capable of taking that on.
In the next blog entry we'll (finally) start looking at what "the desktop" is evolving towards.

Categories: FLOSS Project Planets

Deeson Online: Using Grunt, bootstrap, Compass and SASS in a Drupal sub theme

Planet Drupal - Mon, 2014-08-18 02:37

If you have a separate front end design team from your Drupal developers, you will know that after static pages are moved into a Drupal theme there can be a huge gap in structure between the original files and the final Drupal site.

We wanted to bridge the gap between our theme developers, UX designers, front end coders, and create an all encompassing boilerplate that could be used as a starting point for any project and then easily ported into Drupal.

After thinking about this task for a few weeks it was clear that the best way forward was to use Grunt to automate all of our tasks and create a scalable, well structured sub theme that all of our coders can use to start any project.

What is Grunt?

Grunt is a Javascript task runner that allows you to automate repetitive tasks such as minifying files, javascript linting, CSS preprocessing, and even reloading your browser.

Just like bootstrap, there are many resources and a vast amount of plugins available for Grunt that can automate any task you could think of, plus it is very easy to write your own, so setting Grunt as a standard for our boilerplate was an easy decision.

The purpose of this post

We use bootstrap in most projects and recently switched to using SASS for CSS preprocessing bundled with Compass, so for the purpose of this tutorial we will create a simple bootstrap sub theme that utilises Grunt & Compass to compile SASS files and automatically reloads our browser every time a file is changed.

You can then take this approach and use the best Grunt plugins that suit your project.

Step 1. Prerequisites

To use Grunt you will need node.js and ruby installed on your system. Open up terminal, and type:

node -v
ruby -v

If you don't see a version number, head to the links below to download and install them.

Don’t have node? Download it here

Don’t have ruby? Follow this great tutorial

Step 2. Installing Grunt

Open up terminal, and type:

sudo npm install -g grunt-cli

This will install the command line interface for Grunt. Be patient whilst it is downloading as sometimes it can take a minute or two.

Step 3. Installing Compass and Grunt plugins

Because we want to use the fantastic set of mixins and features bundled with Compass, let's install the Compass and SASS ruby gems.

Open up terminal, and type:

sudo gem install sass
sudo gem install compass

For our boilerplate we only wanted to install plugins that we would need in every project, so we kept it simple and limited it to Watch, Compass and SASS to compile all of our files. Our team members can then add extra plugins later in the project as and when needed.

So let's get started and use the node package manager to install our Grunt plugins.

Switch back to Terminal and run the following commands:

sudo npm install grunt-contrib-watch --save-dev
sudo npm install grunt-contrib-compass --save-dev
sudo npm install grunt-contrib-sass --save-dev

Step 4. Creating the boilerplate

Note: For the purposes of this tutorial we are going to use the bootstrap sub theme for our Grunt setup, but the same Grunt setup described below can be used with any Drupal sub theme.

  • Create a new Drupal site
  • Download the bootstrap theme into your sites/all/themes directory
    drush dl bootstrap
  • Copy the bootstrap starter kit (sites/all/themes/bootstrap/bootstrap_subtheme) into your theme directory
  • Rename bootstrap_subtheme.info.starterkit to bootstrap_subtheme.info
  • Navigate to admin/appearance and click “Enable, and set default" for your sub-theme.

Your Drupal site should now be set up with Bootstrap, and your folder structure should now look like this:

For more information on creating a bootstrap sub theme check out the community documentation.

Step 5. Switching from LESS to SASS

Our developers liked LESS, our designers liked SASS, but after a team tech talk explaining the benefits of using SASS with Compass (a collection of mixins with an updater and some clever sprite creation), everyone agreed that SASS was the way forward.

Officially Bootstrap is now packaged with SASS, so let's replace our .less files with .scss files in our bootstrap_subtheme so we can utilise all of the mixin goodness that comes with SASS & Compass.

  • Head over to bootstrap and download the SASS version
  • Copy the stylesheets folder from boostrap-sass/assets/ and paste it into your bootstrap_subtheme
  • Rename the stylesheets folder to bootstrap-sass
  • Create a new folder called custom-sass in bootstrap_subtheme
  • Create a new file in the custom-sass called style.scss
  • Import bootstrap-sass/bootstrap.scss into style.scss

You should now have the following setup in your sub theme:

We are all set!

Step 6. Setting up Grunt - The package.json & Gruntfile.js

Now let's configure Grunt to run our tasks. Grunt only needs two files to be set up: a package.json file that defines our dependencies, and a Gruntfile.js to configure our plugins.

Within bootstrap_subtheme, create a package.json and add the following code:

{ "name": "bootstrap_subtheme", "version": "1.0.0", "author": “Your Name", "homepage": "http://homepage.com", "engines": { "node": ">= 0.8.0" }, "devDependencies": { "grunt-contrib-compass": "v0.9.0", "grunt-contrib-sass": "v0.7.3", "grunt-contrib-watch": "v0.6.1" } }

In this file you can add whichever plugins are best suited for your project, check out the full list of plugins at the official Grunt site.

Install Grunt dependencies

Next, open up terminal, cd into sites/all/themes/bootstrap_subtheme, and run the following task:

sudo npm install

This command looks through your package.json file and installs the plugins listed. You only have to run this command once when you set up a new Grunt project, or when you add a new plugin to package.json.

Once you run this you will notice a new folder in your bootstrap_subtheme called node_modules, which stores all of your plugins. If you are using git or SVN in your project, make sure to ignore this folder.
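For a git project that can be a one-line addition to your ignore file (file location assumed):

# .gitignore in the theme (or repository) root
node_modules/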

Now let's configure Grunt to use our plugins and automate some tasks. Within bootstrap_subtheme, create a Gruntfile.js file and add the following code:

module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      src: {
        files: ['**/*.scss', '**/*.php'],
        tasks: ['compass:dev']
      },
      options: {
        livereload: true
      }
    },
    compass: {
      dev: {
        options: {
          sassDir: 'custom-sass/scss',
          cssDir: 'css',
          imagesPath: 'assets/img',
          noLineComments: false,
          outputStyle: 'compressed'
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
};

This file is pretty straightforward: we configure our watch task to look for certain files and reload our browser, and then we define our scss and css directories so that Compass knows where to look.

I won’t go into full detail with the options available, but visit the links below to see the documentation:

Watch documentation

SASS documentation

 

Step 7. Enabling live reload

Download and enable the livereload module in your new Drupal site. By default, you will have to be logged in as admin for live reload to take effect, but you can change this under Drupal permissions.

Once you enable livereload, refresh your browser window to load the livereload.js library.

Step 8. Running Grunt

We are all set! Head back over to Terminal and check you are in the bootstrap_subtheme directory, then type:

grunt watch

Now every time you edit a scss file, Grunt will compile your SASS into a compressed style.css file and automatically reload your browser.

Give it a go by importing Compass at the top of your style.scss file and changing the body background to use a Compass mixin.

@import 'compass';
@import '../bootstrap-sass/bootstrap.scss';

/*
 * Custom overrides
 */
body {
  @include background(linear-gradient(#eee, #fff));
}

To stop Grunt from watching your files, press Ctrl and C simultaneously on your keyboard.

Step 9. Debugging

One common problem you may encounter when using Grunt alongside live reload is the following error message:

Fatal error: Port 35729 is already in use by another process.

This means that the port being used by live reload is currently in use by another process, either by a different grunt project, or an application such as Chrome.

If you experience this problem run the following command and find out which application is using the port.

lsof | grep 35729

Simply close the application and run “grunt watch” again. If the error still persists and all else fails, restart your machine and try to stop Grunt from watching files before moving on to another project.

Next steps…

This is just a starting point for what you can achieve using Grunt to automate your tasks, and it gives you a quick insight into how we go about starting a project.

Other things to consider:

  • Duplicating the _variables.scss bootstrap file to override the default settings.
  • Adding linted, minified javascript files using the uglify plugin
  • Configure Grunt to automatically validate your markup using the W3C Markup Validator
  • Write your own Grunt plugins to suit your own projects
Let me know your thoughts - you can share your ideas and views in the comments below.

 

Categories: FLOSS Project Planets

Montreal Python User Group: August organisation Meeting

Planet Python - Mon, 2014-08-18 00:00

The summer is slowly ending and it's time for us to plan our next season. The Montreal-Python team will meet next Wednesday, August 27th to organise and talk about what we would like to do this fall.

If you have ideas, or if you would like to give a hand, please come join us!

Where

The meeting will be held at the Ajah offices at 1124 Marie-Est suite 11 (https://goo.gl/maps/74aWY)

When

Wednesday, August 27 at 7:00 pm

Schedule and Plan
  • Opening and return on the summer and spring seasons
  • MP48
  • Project Nights
  • PyCon 2015
  • Varia

See you there, and if you have any comments or questions, please don't hesitate to write to us at: mtlpyteam@googlegroups.com

Categories: FLOSS Project Planets

[KDEConnect] Report at the end of GSOC and expectations

Planet KDE - Sun, 2014-08-17 20:00

First of all, I would like to express my gratitude to my mentors Mr. Àlex Fiestas and Mr. Albert Vaca, and KDE hackers Mr. Daniel Vrátil, Mr. Sergio Martins, Mr. Kevin Krammer, etc. Also, thanks to the Chinese GDG, who supported us in holding a meet-up in Beijing, introducing our projects, discussing and making some noise to get more students involved.

 

For a brief resume, I’ve accomplished the basic backend and UI for the iOS platform, and the plugins Ping, MPRIS, Photo Sharing, Clipboard Sync, Mousepad and Battery report, which had already been implemented in the Android version.

Besides, I’ve also implemented new plugins for the synchronization of calendar events, reminders, and contacts, using Akonadi to manage these resources.

During this GSOC, I’ve learned a lot about iOS development, and the development on KDE showed me the art of its structures and design. For an inexperienced developer like me, it was rather complicated but really fascinating. I’ve spent a lot of time on tests, trying to figure out the correct way to manipulate the resources as I wanted, but I always got unexpected bugs or misunderstood the way Akonadi works. Thanks to the mailing list and the patient hackers on IRC, I finally managed to comprehend it and make it work. That made me realize the importance of communication. One should talk with other experienced people even if it seems like a stupid question. That saves a lot of time and lets people get more ideas.

Besides, I also encountered many weird problems which were finally resolved via stack overflow and google searches. For example: the public key generated by the iOS api misses some important info at the head, which makes the public key shorter and unrecognizable for others. The address book apis on iOS are not as well wrapped as on OSX, and they don’t provide a UID for every contact, which makes it complicated to identify a contact by UID. There’s no parser for calendars in icalendar format, so I had to use the C library libical for parsing and create the icalendar format string manually (it doesn’t work perfectly; maybe we can find a better way, or create a complete library for icalendar generating and parsing). A two-finger tap gesture will always trigger a single-tap gesture, which doesn’t hurt, but I’m still thinking about how we can create a better self-defined gesture based on the iOS gesture recognizers without triggering the wrong one.

 

I was planning to take advantage of libimobiledevice to bring its useful tools into the UI, or maybe create some new tools. But I found that the library is a bit complicated for me and it will need more work. I just gave it a try by wrapping up some basic tools such as: getting the connected device list, getting device info, getting the realtime syslog, etc. As the time for GSOC is limited, I didn’t manage to make real progress on it, and these features are not so useful for users. That remains one of my expectations, since there is no iTunes on linux.

I would also like to see a BlueTooth 4.0 Low Energy connection supported with a new backend. Through BTLE, we would be able to save a lot of battery, and we would also be able to get more cool things from iOS, since notifications are only allowed to be retrieved by BTLE devices (these apis are designed only for smart watches or other such devices). I was also told that kdeconnect can be blocked under the restrictions of some school/company wifi. So it would be cool if we could support a BT connection.

As for improvements, I’m working with Aleix Pol on a plugin loader which will load only the supported plugins instead of all plugins; it is being tested. The payload transfer should be encrypted, and I would love to add more gestures for the touchpad plugin.

As iOS 8 is coming, we are sure to take advantage of the new apis, such as adding kdeconnect into the custom share sheet so that we can share a photo/url from outside the app.

In the end, I would like to express my gratitude again to everyone who helped me and who is following this GREAT project! This is my first time making a contribution to an open source community and I’ve really enjoyed it. I would have to say, it’s a good beginning for me and I’ll keep learning and contributing :)

Categories: FLOSS Project Planets

James Duncan: Stefan Sagmeister says you’re not a storyteller

Planet Apache - Sun, 2014-08-17 18:00

Designer Stefan Sagmeister takes a critical stance towards the storytelling meme that’s so very popular right now. I think he’s overreaching, and possibly doing so to make a point, but it does seem like the label has been adopted by the asshats who eventually show up and ruin every good metaphor. In any case, it’s a provocative short piece.

via A Photo Editor via permalink
Categories: FLOSS Project Planets

James Duncan: The Internet’s original sin

Planet Apache - Sun, 2014-08-17 18:00

Twenty years into the ad-supported web, Ethan Zuckerman argues that it isn’t too late to stop the seemingly inexorable move towards a centralized and heavily surveilled Internet. I hope he’s right.

via permalink
Categories: FLOSS Project Planets

The Book

Planet KDE - Sun, 2014-08-17 17:02

When inviting people to the Randa 2014 meeting, Mario had the idea to write a book about KDE Frameworks. Valorie picked up this idea and kicked off a small team to tackle the task. So in the middle of August, Valorie, Rohan, Mirko, Bruno, and I gathered in a small room under the roof of the Randa house and started to ponder how to accomplish writing a book in the week of the meeting. Three days later, and with the help of many others, Valorie showed around the first version of the book on her Kindle at breakfast. Mission accomplished.


Mission accomplished is a bit of an exaggeration, as you might have suspected. While we had a first version of the book, of course there still is a lot to be added, more content, more structure, more beautiful appearance. But we had quickly settled on the idea that the book shouldn't be a one-time effort, but an on-going project, which grows over time, and is continuously updated as the Frameworks develop, and people find the time and energy to contribute content.

So in addition to writing initial content, we spent our thoughts and work on setting up an infrastructure which will support a sustained effort to develop and maintain the book. While more will come, having the book on the Kindle to show around indeed was the first part of our mission accomplished.

Content-wise we decided to target beginning and mildly experienced Qt developers, and to present the book as some form of cookbook, with sections about how to solve specific problems, for example writing file archives, storing configuration, spell-checking, concurrent execution of tasks, or starting to write a new application.



There already is a lot of good content in our API documentation and on techbase.kde.org, so the book is more a remix of existing documentation spiced with a bit of new content to keep the pieces together or to adapt it to the changes between kdelibs 4 and Frameworks 5.

The book lives in a git repository. We will move it to a more official location a bit later. It's a combination of articles written in markdown and compiling code, from which snippets are dragged into the text as examples. A little bit of tooling around pandoc gives us the toolchain and infrastructure to generate the book without much effort. We actually intend to automatically generate current versions with our continuous integration system, whenever something changes.
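As a rough illustration of how small such tooling can be, an EPUB build boils down to a single pandoc call; the file names below are assumptions of mine, not the project's actual layout:

# Hypothetical sketch: stitch the markdown chapters into an EPUB with a
# table of contents.
pandoc --toc -o kde-frameworks-book.epub title.txt ch01-intro.md ch02-kconfig.md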

While some content now is in the book git repository, we intend to maintain the examples and their documentation as close as possible to the Frameworks they describe. So most of the text and code is supposed to live in the same repositories where the code itself is maintained. The pieces are aggregated in the book repository via git submodules.

Comments and contributions are very welcome. If you are maintaining one of the Frameworks or you are otherwise familiar with them, please don't hesitate to let us know, send us patches, or just commit to the git repository.



I had fun writing some example code and tutorials for creating new applications and how to use KPlotting and KConfig. The group was amazing, and after some struggling with tools, network, and settling on what path to take, there was a tremendous amount of energy, which carried us through days and nights of writing and hacking. This is the magic of KDE sprints. There are few places where you can experience this better than in Randa.

Update: Many people are involved with creating the book, and I'm grateful to everybody who is contributing, even if I haven't mentioned you personally. There is one guy I should have mentioned, though, and that is Bruno Friedmann who made the wonderful cover and always is a source of energy and inspiration.

Categories: FLOSS Project Planets