Feeds

Gunnar Wolf: Guests in the classroom: @chemaserralde talks about real time scheduling

Planet Debian - Wed, 2014-10-29 16:47

Last Wednesday I had the pleasure and honor of hosting a great guest at my class again: José María Serralde, talking about real time scheduling. I like inviting different people to present interesting topics to my students a couple of times each semester, and I was very happy to have Chema come again.

Chema is a professional musician (formally a pianist, although his skills go well beyond what that title conveys, and well beyond music alone), and he had to learn the details of real time scheduling because of errors that appear when recording and performing.

The audio could use some cleaning, and my main camera (the only one that lasted for the whole session) was nowhere near professional grade, but the video works and is IMO quite interesting and well explained.

So, here is the full video (also available at the Internet Archive), all two hours and 500MB of it, for you to learn from and enjoy!


Mediacurrent: 10 Things I Wish I Knew About Drupal 2 Years Ago

Planet Drupal - Wed, 2014-10-29 16:44

They say that hindsight is 20/20. With the many advances that have happened in the Drupal community recently, we asked our team, "What is the one thing you wish you knew about Drupal two years ago?"

"I wish I knew about the Headless Drupal initiative so that I could be ahead of the curve on the JavaScript technologies it will require." - Chris Doherty


Matt Raible: Building a REST API with JAXB, Spring Boot and Spring Data

Planet Apache - Wed, 2014-10-29 16:25

If someone asked you to develop a REST API on the JVM, which frameworks would you use? I was recently tasked with such a project. My client asked me to implement a REST API to ingest requests from a 3rd party. The project entailed consuming XML requests, storing the data in a database, then exposing the data to an internal application with a JSON endpoint. Finally, it would allow taking in a JSON request and turning it into an XML request back to the 3rd party.

With the recent release of Apache Camel 2.14 and my success using it, I started by copying my Apache Camel / CXF / Spring Boot project and trimming it down to the bare essentials. I whipped together a simple Hello World service using Camel and Spring MVC. I also integrated Swagger into both. Both implementations were pretty easy to create (sample code), but I decided to use Spring MVC. My reasons were simple: its REST support was more mature, I knew it well, and Spring MVC Test makes it easy to test APIs.

Camel's Swagger support without web.xml
As part of the aforementioned spike, I figured out how to configure Camel's REST and Swagger support using Spring's JavaConfig and no web.xml. I made this into a sample project and put it on GitHub as camel-rest-swagger.

This article shows how I built a REST API with Java 8, Spring Boot/MVC, JAXB and Spring Data (JPA and REST components). I stumbled a few times while developing this project, but figured out how to get over all the hurdles. I hope this helps the team that's now maintaining this project (my last day was Friday) and those that are trying to do something similar.

XML to Java with JAXB

The data we needed to ingest from a 3rd party was based on the NCPDP Standards. As a member, we were able to download a number of XSD files, put them in our project and generate Java classes to handle the incoming/outgoing requests. I used the maven-jaxb2-plugin to generate the Java classes.

<plugin>
    <groupId>org.jvnet.jaxb2.maven2</groupId>
    <artifactId>maven-jaxb2-plugin</artifactId>
    <version>0.8.3</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <args>
                    <arg>-XtoString</arg>
                    <arg>-Xequals</arg>
                    <arg>-XhashCode</arg>
                    <arg>-Xcopyable</arg>
                </args>
                <plugins>
                    <plugin>
                        <groupId>org.jvnet.jaxb2_commons</groupId>
                        <artifactId>jaxb2-basics</artifactId>
                        <version>0.6.4</version>
                    </plugin>
                </plugins>
                <schemaDirectory>src/main/resources/schemas/ncpdp</schemaDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>

The first error I ran into was about a property already being defined.

[INFO] --- maven-jaxb2-plugin:0.8.3:generate (default) @ spring-app ---
[ERROR] Error while parsing schema(s). Location [ file:/Users/mraible/dev/spring-app/src/main/resources/schemas/ncpdp/structures.xsd{1811,48}].
com.sun.istack.SAXParseException2; systemId: file:/Users/mraible/dev/spring-app/src/main/resources/schemas/ncpdp/structures.xsd;
lineNumber: 1811; columnNumber: 48; Property "MultipleTimingModifierAndTimingAndDuration" is already defined.
Use <jaxb:property> to resolve this conflict.
    at com.sun.tools.xjc.ErrorReceiver.error(ErrorReceiver.java:86)

I was able to work around this by upgrading to maven-jaxb2-plugin version 0.9.1. I created a controller and stubbed out a response with hard-coded data. I confirmed the incoming XML-to-Java marshalling worked by testing with a sample request provided by our 3rd party customer. I started with a curl command, because it was easy to use and could be run by anyone with the file and curl installed.

curl -X POST -H 'Accept: application/xml' -H 'Content-type: application/xml' \
  --data-binary @sample-request.xml http://localhost:8080/api/message -v
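The stubbed controller behind this endpoint isn't shown in the post; it probably looked roughly like this (a sketch: the mapping comes from the curl command and the class name from the test shown later, everything else is an assumption):

@RestController
public class InitiateRequestController {

    // Stub: Spring unmarshals the JAXB-generated Message from the request body,
    // and we return hard-coded data until the real processing exists.
    @RequestMapping(value = "/api/message", method = RequestMethod.POST,
                    produces = MediaType.APPLICATION_XML_VALUE)
    public Message initiate(@RequestBody Message request) {
        Message response = new Message();
        // ... populate the header and body with canned values ...
        return response;
    }
}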

This is when I ran into another stumbling block: the response wasn't getting marshalled back to XML correctly. After some research, I found out this was caused by the lack of @XmlRootElement annotations on my generated classes. I posted a question to Stack Overflow titled Returning JAXB-generated elements from Spring Boot Controller. After banging my head against the wall for a couple days, I figured out the solution.

I created a bindings.xjb file in the same directory as my schemas. This causes JAXB to generate @XmlRootElement on classes.

<?xml version="1.0"?>
<jxb:bindings version="1.0"
              xmlns:xsd="http://www.w3.org/2001/XMLSchema"
              xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
              xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://java.sun.com/xml/ns/jaxb http://java.sun.com/xml/ns/jaxb/bindingschema_2_0.xsd">
    <jxb:bindings schemaLocation="transport.xsd" node="/xsd:schema">
        <jxb:globalBindings>
            <xjc:simple/>
        </jxb:globalBindings>
    </jxb:bindings>
</jxb:bindings>

To add namespaces prefixes to the returned XML, I had to modify the maven-jaxb2-plugin to add a couple arguments.

<arg>-extension</arg>
<arg>-Xnamespace-prefix</arg>

And add a dependency:

<dependencies>
    <dependency>
        <groupId>org.jvnet.jaxb2_commons</groupId>
        <artifactId>jaxb2-namespace-prefix</artifactId>
        <version>1.1</version>
    </dependency>
</dependencies>

Then I modified bindings.xjb to include the package and prefix settings. I also moved <xjc:simple/> into a global setting. I eventually had to add prefixes for all schemas and their packages.

<?xml version="1.0"?>
<bindings version="2.0"
          xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          xmlns="http://java.sun.com/xml/ns/jaxb"
          xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:namespace="http://jaxb2-commons.dev.java.net/namespace-prefix"
          xsi:schemaLocation="http://java.sun.com/xml/ns/jaxb http://java.sun.com/xml/ns/jaxb/bindingschema_2_0.xsd
                              http://jaxb2-commons.dev.java.net/namespace-prefix http://java.net/projects/jaxb2-commons/sources/svn/content/namespace-prefix/trunk/src/main/resources/prefix-namespace-schema.xsd">
    <globalBindings>
        <xjc:simple/>
    </globalBindings>
    <bindings schemaLocation="transport.xsd" node="/xsd:schema">
        <schemaBindings>
            <package name="org.ncpdp.schema.transport"/>
        </schemaBindings>
        <bindings>
            <namespace:prefix name="transport"/>
        </bindings>
    </bindings>
</bindings>

I learned how to add prefixes from the namespace-prefix plugins page.

Finally, I customized the code-generation process to generate Joda Time's DateTime instead of the default XMLGregorianCalendar. This involved a couple custom XmlAdapters and a couple additional lines in bindings.xjb. You can see the adapters and bindings.xjb with all necessary prefixes in this gist. Nicolas Fränkel's Customize your JAXB bindings was a great resource for making all this work.
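The actual adapters and bindings are in the gist linked above; as a rough sketch of the general shape (class and package names here are assumptions, not the post's code), such an adapter converts between xsd:dateTime strings and Joda-Time objects:

// imports assumed: javax.xml.bind.annotation.adapters.XmlAdapter,
// org.joda.time.DateTime, org.joda.time.format.ISODateTimeFormat
public class DateTimeXmlAdapter extends XmlAdapter<String, DateTime> {

    @Override
    public DateTime unmarshal(String value) {
        return value == null ? null : ISODateTimeFormat.dateTimeParser().parseDateTime(value);
    }

    @Override
    public String marshal(DateTime value) {
        return value == null ? null : ISODateTimeFormat.dateTime().print(value);
    }
}

The matching bindings.xjb lines then point xjc at the adapter via the vendor extension:

<globalBindings>
    <xjc:simple/>
    <xjc:javaType adapter="com.example.jaxb.DateTimeXmlAdapter"
                  name="org.joda.time.DateTime" xmlType="xsd:dateTime"/>
</globalBindings>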

I wrote a test to prove that the ingest API worked as desired.

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class InitiateRequestControllerTest {

    @Inject
    private InitiateRequestController controller;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        this.mockMvc = MockMvcBuilders.standaloneSetup(controller).build();
    }

    @Test
    public void testGetNotAllowedOnMessagesAPI() throws Exception {
        mockMvc.perform(get("/api/initiate")
                .accept(MediaType.APPLICATION_XML))
                .andExpect(status().isMethodNotAllowed());
    }

    @Test
    public void testPostPaInitiationRequest() throws Exception {
        String request = new Scanner(new ClassPathResource("sample-request.xml").getFile()).useDelimiter("\\Z").next();

        mockMvc.perform(post("/api/initiate")
                .accept(MediaType.APPLICATION_XML)
                .contentType(MediaType.APPLICATION_XML)
                .content(request))
                .andExpect(status().isOk())
                .andExpect(content().contentType(MediaType.APPLICATION_XML))
                .andExpect(xpath("/Message/Header/To").string("3rdParty"))
                .andExpect(xpath("/Message/Header/SenderSoftware/SenderSoftwareDeveloper").string("HID"))
                .andExpect(xpath("/Message/Body/Status/Code").string("010"));
    }
}

Spring Data for JPA and REST

With JAXB out of the way, I turned to creating an internal API that could be used by another application. Spring Data was fresh in my mind after reading about it last summer. I created classes for entities I wanted to persist, using Lombok's @Data to reduce boilerplate.
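A sketch of what such an entity looks like (the Message name and its id and dateCreated fields are referenced later in the post; everything else is assumed):

@Entity
@Data // Lombok generates the getters, setters, equals(), hashCode() and toString()
public class Message {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    // Joda-Time fields need the Jadira @Type annotation shown below
    private DateTime dateCreated;
}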

I read the Accessing Data with JPA guide, created a couple repositories and wrote some tests to prove they worked. I ran into an issue trying to persist Joda's DateTime and found Jadira provided a solution.

I added its usertype.core as a dependency to my pom.xml:

<dependency>
    <groupId>org.jadira.usertype</groupId>
    <artifactId>usertype.core</artifactId>
    <version>3.2.0.GA</version>
</dependency>

... and annotated DateTime variables accordingly.

@Column(name = "last_modified", nullable = false)
@Type(type = "org.jadira.usertype.dateandtime.joda.PersistentDateTime")
private DateTime lastModified;

With JPA working, I turned to exposing REST endpoints. I used Accessing JPA Data with REST as a guide and was looking at JSON in my browser in a matter of minutes. I was surprised to see a "profile" service listed next to mine, and posted a question to the Spring Boot team. Oliver Gierke provided an excellent answer.
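The repository interface itself needs almost no code; a sketch of the usual Spring Data REST shape (the MessageRepository name is an assumption):

// Spring Data REST exposes this automatically, e.g. at /messages
@RepositoryRestResource
public interface MessageRepository extends PagingAndSortingRepository<Message, Long> {
}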

Swagger

Spring MVC's integration with Swagger has greatly improved since I last wrote about it. Now you can enable it with an @EnableSwagger annotation. Below is the SwaggerConfig class I used to configure Swagger and read properties from application.yml.

@Configuration
@EnableSwagger
public class SwaggerConfig implements EnvironmentAware {

    public static final String DEFAULT_INCLUDE_PATTERN = "/api/.*";

    private RelaxedPropertyResolver propertyResolver;

    @Override
    public void setEnvironment(Environment environment) {
        this.propertyResolver = new RelaxedPropertyResolver(environment, "swagger.");
    }

    /**
     * Swagger Spring MVC configuration
     */
    @Bean
    public SwaggerSpringMvcPlugin swaggerSpringMvcPlugin(SpringSwaggerConfig springSwaggerConfig) {
        return new SwaggerSpringMvcPlugin(springSwaggerConfig)
                .apiInfo(apiInfo())
                .genericModelSubstitutes(ResponseEntity.class)
                .includePatterns(DEFAULT_INCLUDE_PATTERN);
    }

    /**
     * API Info as it appears on the swagger-ui page
     */
    private ApiInfo apiInfo() {
        return new ApiInfo(
                propertyResolver.getProperty("title"),
                propertyResolver.getProperty("description"),
                propertyResolver.getProperty("termsOfServiceUrl"),
                propertyResolver.getProperty("contact"),
                propertyResolver.getProperty("license"),
                propertyResolver.getProperty("licenseUrl"));
    }
}

After getting Swagger to work, I discovered that endpoints published with @RepositoryRestResource aren't picked up by Swagger. There is an open issue for Spring Data support in the swagger-springmvc project.

Liquibase Integration

I configured this project to use H2 in development and PostgreSQL in production. I used Spring profiles to do this and copied XML/YAML (for Maven and application*.yml files) from a previously created JHipster project.
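A sketch of what the profile-specific parts of those application*.yml files might contain (the property keys are standard Spring Boot; the values are assumptions):

# application-dev.yml - in-memory H2, schema recreated on each run
spring:
  datasource:
    url: jdbc:h2:mem:mydb
    username: sa
  jpa:
    hibernate:
      ddl-auto: create-drop

# application-prod.yml - PostgreSQL, schema managed by Liquibase
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/mydb
    username: user
    password: pass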

Next, I needed to create a database. I decided to use Liquibase to create tables, rather than Hibernate's schema-export. I chose Liquibase over Flyway based on discussions in the JHipster project. Using Liquibase with Spring Boot is dead simple: add the following dependency to pom.xml, then place changelog files in src/main/resources/db/changelog.

<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
</dependency>
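For reference, a minimal sketch of what a YAML changelog in that directory can look like (the table and columns here are hypothetical):

databaseChangeLog:
  - changeSet:
      id: 1
      author: dev
      changes:
        - createTable:
            tableName: message
            columns:
              - column:
                  name: id
                  type: bigint
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: last_modified
                  type: timestamp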

I started by using Hibernate's schema-export and changing hibernate.ddl-auto to "create-drop" in application-dev.yml. I also commented out the liquibase-core dependency. Then I set up a PostgreSQL database and started the app with "mvn spring-boot:run -Pprod".

I generated the liquibase changelog from an existing schema using the following command (after downloading and installing Liquibase).

liquibase --driver=org.postgresql.Driver \
  --classpath="/Users/mraible/.m2/repository/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar:/Users/mraible/snakeyaml-1.11.jar" \
  --changeLogFile=/Users/mraible/dev/spring-app/src/main/resources/db/changelog/db.changelog-02.yaml \
  --url="jdbc:postgresql://localhost:5432/mydb" \
  --username=user --password=pass \
  generateChangeLog

I did find one bug: in version 3.2.2, the generateChangeLog command generates too many constraints. I was able to fix this by manually editing the generated YAML file.

Tip: If you want to drop all tables in your database to verify that Liquibase creation is working in PostgreSQL, run the following commands:

psql -d mydb
drop schema public cascade;
create schema public;

After writing minimal code for Spring Data and configuring Liquibase to create tables/relationships, I relaxed a bit, documented how everything worked and added a LoggingFilter. The LoggingFilter was handy for viewing API requests and responses.

@Bean
public FilterRegistrationBean loggingFilter() {
    LoggingFilter filter = new LoggingFilter();

    FilterRegistrationBean registrationBean = new FilterRegistrationBean();
    registrationBean.setFilter(filter);
    registrationBean.setUrlPatterns(Arrays.asList("/api/*"));

    return registrationBean;
}

Accessing API with RestTemplate

The final step was to figure out how to access my new and fancy API with RestTemplate. At first, I thought it would be easy. Then I realized that Spring Data produces a HAL-compliant API, so its content is embedded inside an "_embedded" JSON key.

After much trial and error, I discovered I needed to create a RestTemplate with HAL and Joda-Time awareness.

@Bean
public RestTemplate restTemplate() {
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    mapper.registerModule(new Jackson2HalModule());
    mapper.registerModule(new JodaModule());

    MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
    converter.setSupportedMediaTypes(MediaType.parseMediaTypes("application/hal+json"));
    converter.setObjectMapper(mapper);

    StringHttpMessageConverter stringConverter = new StringHttpMessageConverter();
    stringConverter.setSupportedMediaTypes(MediaType.parseMediaTypes("application/xml"));

    List<HttpMessageConverter<?>> converters = new ArrayList<>();
    converters.add(converter);
    converters.add(stringConverter);

    return new RestTemplate(converters);
}

The JodaModule was provided by the following dependency:

<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-joda</artifactId>
</dependency>

With the configuration complete, I was able to write a MessagesApiITest integration test that posts a request and retrieves it using the API. The API was secured using basic authentication, so it took me a bit to figure out how to make that work with RestTemplate. Willie Wheeler's Basic Authentication With Spring RestTemplate was a big help.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = IntegrationTestConfig.class)
public class MessagesApiITest {

    private final static Log log = LogFactory.getLog(MessagesApiITest.class);

    @Value("http://${app.host}/api/initiate")
    private String initiateAPI;

    @Value("http://${app.host}/api/messages")
    private String messagesAPI;

    @Value("${app.host}")
    private String host;

    @Inject
    private RestTemplate restTemplate;

    @Before
    public void setup() throws Exception {
        String request = new Scanner(new ClassPathResource("sample-request.xml").getFile()).useDelimiter("\\Z").next();
        ResponseEntity<org.ncpdp.schema.transport.Message> response =
                restTemplate.exchange(getTestUrl(initiateAPI), HttpMethod.POST,
                        getBasicAuthHeaders(request), org.ncpdp.schema.transport.Message.class,
                        Collections.emptyMap());
        assertEquals(HttpStatus.OK, response.getStatusCode());
    }

    @Test
    public void testGetMessages() {
        HttpEntity<String> request = getBasicAuthHeaders(null);
        ResponseEntity<PagedResources<Message>> result =
                restTemplate.exchange(getTestUrl(messagesAPI), HttpMethod.GET, request,
                        new ParameterizedTypeReference<PagedResources<Message>>() {});
        HttpStatus status = result.getStatusCode();

        Collection<Message> messages = result.getBody().getContent();
        log.debug("messages found: " + messages.size());
        assertEquals(HttpStatus.OK, status);

        for (Message message : messages) {
            log.debug("message.id: " + message.getId());
            log.debug("message.dateCreated: " + message.getDateCreated());
        }
    }

    private HttpEntity<String> getBasicAuthHeaders(String body) {
        String plainCreds = "user:pass";
        byte[] plainCredsBytes = plainCreds.getBytes();
        byte[] base64CredsBytes = Base64.encodeBase64(plainCredsBytes);
        String base64Creds = new String(base64CredsBytes);

        HttpHeaders headers = new HttpHeaders();
        headers.add("Authorization", "Basic " + base64Creds);
        headers.add("Content-type", "application/xml");

        if (body == null) {
            return new HttpEntity<>(headers);
        } else {
            return new HttpEntity<>(body, headers);
        }
    }
}

To get Spring Data to populate the message id, I created a custom RestConfig class to expose it. I learned how to do this from Tommy Ziegler.

/**
 * Used to expose ids for resources.
 */
@Configuration
public class RestConfig extends RepositoryRestMvcConfiguration {

    @Override
    protected void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        config.exposeIdsFor(Message.class);
        config.setBaseUri("/api");
    }
}

Summary

This article explains how I built a REST API using JAXB, Spring Boot, Spring Data and Liquibase. It was relatively easy to build, but required some tricks to access it with Spring's RestTemplate. Figuring out how to customize JAXB's code generation was also essential to make things work.

I started developing the project with Spring Boot 1.1.7, but upgraded to 1.2.0.M2 after I found it supported Log4j2 and configuring Spring Data REST's base URI in application.yml. When I handed the project off to my client last week, it was using 1.2.0.BUILD-SNAPSHOT because of a bug when running in Tomcat.

This was an enjoyable project to work on. I especially liked how easy Spring Data makes it to expose JPA entities in an API. Spring Boot made things easy to configure once again and Liquibase seems like a nice tool for database migrations.

If someone asked me to develop a REST API on the JVM, which frameworks would I use? Spring Boot, Spring Data, Jackson, Joda-Time, Lombok and Liquibase. These frameworks worked really well for me on this particular project.


Rhonda D'Vine: Feminist Year

Planet Debian - Wed, 2014-10-29 15:47

If someone had told me that I would attend three feminist events this year, I would have slowly nodded, responded "yeah, sure..." and not believed it. But sometimes things take their own turns.

It all started with the Debian Women Mini-DebConf in Barcelona. The organizers asked me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. We settled on "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part happened later, during the lightning talks: someone on IRC asked why there were male people in the lightning talks, the only part of the program explicitly open to them. That also felt very, very nice, to be honest: my own talk wasn't questioned. Those are among the reasons why I wrote My place is here, my home is Debconf.

The second event I went to was FemCamp Wien. It was my first barcamp, so I didn't know what to expect organization-wise. Topic-wise it was about queer feminism. And it was the first event I attended that had a policy. Granted, one part of it was written extremely sloppily, which naturally ended up in a shitstorm on Twitter (which people on both sides managed very badly, and that disappointed me). Denying that there is sexism against cis males is just a bad idea, but the background was that this simply wasn't the topic of this event. The point of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and this barcamp wanted to make clear that people who usually shy away from such events for fear of harassment could feel at home there.

And what can I say, this absolutely was the right thing to do. I never felt more welcomed and included at any event, including Debian events (sorry to say that so frankly). Making it clear through the policy that everyone is in the same boat about addressing each other respectfully achieved exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of sessions was simply great. This was the event where I came up with the pattern of judging the quality of an event by the sessions I'm unable to attend. What hurt me most in hindsight was that I couldn't attend the session about minorities within minorities. :/

Last but not least I attended AdaCamp Berlin, a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots for people who identify as women, so I was totally hyper when I received the mail saying I was accepted. It was another event with a policy, and at first reading it looked strange. But given that some people are allergic to the ingredients of scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also made sense to remind people to behave. After all, it was a general policy for all AdaCamps, not just for this specific one with only women.

I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend who I literally hadn't talked to in years. I enjoyed the environment and the sessions that were going on. And quite similar to FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about the impostor syndrome, which is extremely common among women in IT. And what can I say, I found myself in one of the slides, given that just the day before I had tweeted that I doubted I belonged there. Frankly spoken, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what the impostor syndrome is all about, isn't it? But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for the great definition of feminism she brought up during one session: for her, feminism isn't about gender but about the equality of all people regardless of their sex or gender definition. It's about dropping this whole binary thinking. I couldn't agree more.

All in all, I totally enjoyed these events and hope I'll be able to attend more next year. From what I gathered, all three of them are thinking of doing it again; FemCamp Vienna even announced next year's date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind: there will always be critics and haters out there, but given that they wouldn't think of attending such an event in the first place, don't get wound up about it. They are just trying to talk you down.

P.S.: Ah, I almost forgot one thing worth mentioning, which also helps a lot to lower the barrier for people to attend: the catering during the day and for lunch at both FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-DebConf) removed the need for people to ask whether there would be food without meat and dairy products, by simply offering mostly vegan food in the first place, without even having to query the participants. Often enough, people otherwise choose to leave the event or bring their own food instead of asking, so this is an extremely welcoming move, too. Way to go!



KNotification, KDE Connect and what we can do to make the future connected

Planet KDE - Wed, 2014-10-29 15:19

So, I was "politely" annoying people on KDE channels over the last few days, because I found some interconnected pieces of KDE software that are not really integrated, but are screaming for it.

It all started when I questioned our old knotify dialog, which for years we have blamed for not being extensible, and mck182, our own Martin Klapetek, gave me a class on the wonders of KNotification in Frameworks 5. What led me there is the desire to have several devices (and possibly machines) interconnecting their notifications, in both directions, and KDE Connect is one of the main middle guys to do that.

At the same time, unexpected but welcome, Albert Vaca blogged about the other side of my idea: the goals for KDE Connect. He clearly asked for usable plugins to exchange information with devices like Android, iOS, etc.

What we as KDE are not seeing is that this, like all of our frameworks, should be transparent, in a way that ANY Frameworks application that uses KNotification could directly reach ANY registered device, in both directions.

Simple example:

We work daily on computers, and most of the time we don't want to be bothered. We receive a message from some person on our mobile, and suddenly, unless we muted it, the notification appears on our screen, even when we are far from the mobile. KDE Connect ALREADY does that. But what about the other way around? I have Konversation open but marked away, and someone from work pings me about something important. Konversation could pick up this message and forward it to my mobile, or to ANY device I choose. This KDE Connect does not do, and it probably wasn't designed to.

And what could be done to aim for this kind of future?

First of all, I think this should be taken to the VDG to create a feasible extended knotification dialog where we can register multiple endpoints (say, one Android mobile, one iOS mobile, one Windows tablet, one desktop), and then we can simply connect our software's notifications to any of them, and receive notifications from this or that mobile in turn.

This is conceptually easy to imagine with a diagram of the devices and the notification sources, but I can hardly imagine how this could be brought down to the desktop; that's why the VDG is my bet.

Second, we need to unify the KDE Connect plugins idea and our knotification system, so that they talk as one framework. This I would love to discuss with Albert and Martin, and with everyone who buys into this idea, if someone in the deep grounds of the KDE labs hasn't already started.

I hope someone buys this idea and pushes it forward, because this kind of thing is impossible to do alone, and thankfully our beloved KDE project is specialized in working as a family and making things happen.


Kubuntu Vivid in Bright Blue

Planet KDE - Wed, 2014-10-29 15:11

Kubuntu Vivid is the development name for what will be released in April next year as Kubuntu 15.04.

The exciting news is that, following some discussion and some wavering, we will be switching to Plasma 5 by default. It has shown itself to be a solid and reliable platform, and it's time to show it off to the world.

There are some bits still missing from Plasma 5, and we hope to fill those in over the next six months. Click on our Todo board above if you want to see what's in store and if you want to help out!

The other change that affects workflow is that we're now using Debian git to store our packaging in a kubuntu branch, so hopefully it'll be easier to share updates.


freeipmi @ Savannah: FreeIPMI 1.4.6 Released

GNU Planet! - Wed, 2014-10-29 14:50

http://ftp.gnu.org/gnu/freeipmi/freeipmi-1.4.6.tar.gz

FreeIPMI 1.4.6 - 10/29/14
-------------------------
o In ipmi-fru, support output of DDR4 SDRAM modules.
o Fix EFI probing on non IA64 systems.
o Fix corner case in ipmi-raw w/ standard input or --file and empty lines.
o Fix parsing corner case in ipmi-chassis.
o Support SSIF bridging.


Steve Kemp: A brief introduction to freebsd

Planet Debian - Wed, 2014-10-29 14:37

I've spent the past thirty minutes installing FreeBSD as a KVM guest. This mostly involved fetching the ISO (I chose the latest stable release 10.0), and accepting all the defaults. A pleasant experience.

As I'm running KVM inside screen I wanted to see the boot prompt, etc, via the serial console, which took two distinct steps:

  • Enabling the serial console - which lets boot stuff show up
  • Enabling a login prompt on the serial console in case I screw up the networking.

To configure boot messages to display via the serial console, issue the following command as the superuser:

# echo 'console="comconsole"' >> /boot/loader.conf

To get a login: prompt you'll want to edit /etc/ttys and change "off" to "on" and "dialup" to "vt100" for the ttyu0 entry. Once you've done that reload init via:

# kill -HUP 1
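For reference, the modified ttyu0 entry in /etc/ttys should then look something like this (a sketch; the getty type string may differ on your install):

ttyu0   "/usr/libexec/getty std.9600"   vt100   on  secure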

Enable remote root logins, if you're brave, or disable PAM and password authentication if you're sensible:

vi /etc/ssh/sshd_config
/etc/rc.d/sshd restart

Configure the system to allow binary package installation. To be honest I was hazy on why this was required, but I ran the two commands and it all worked out:

pkg
pkg2ng

Now you may install a package via a simple command such as:

pkg add screen

Removing packages you no longer want is as simple as using the delete option:

pkg delete curl

You can see installed packages via "pkg info", and there are more options to be found via "pkg help". In the future you can apply updates via:

pkg update && pkg upgrade

Finally I've installed 10.0-RELEASE which can be upgraded in the future via "freebsd-update" - This seems to boil down to "freebsd-update fetch" and "freebsd-update install" but I'm hazy on that just yet. For the moment you can see your installed version via:

uname -a ; freebsd-version

Expect my future CPAN releases, etc, to be tested on FreeBSD too now :)


Paul Everitt: Faster relevance ranking didn't make it into PostgreSQL 9.4

Planet Python - Wed, 2014-10-29 13:34

Alas, the patch for the one big feature we really needed apparently got rejected.

PostgreSQL has a nice little full-text search story, especially when you combine it with other parts of our story (security-aware filtering of results, transactional integrity, etc.) Searches are very, very fast.

However, the next step — ranking the results — isn’t so fast. It requires a table scan (likely to TOAST files, meaning read a file and gunzip its contents) on every row that matched.
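To make that concrete, here is a sketch of the kind of query involved (table and column names are hypothetical): the GIN index answers the @@ match quickly, but ts_rank() must fetch, and usually detoast, the tsvector of every matching row to compute a score.

SELECT id, ts_rank(tsv, query) AS rank
FROM docs, to_tsquery('english', 'auto:*') AS query
WHERE tsv @@ query
ORDER BY rank DESC
LIMIT 20;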

In our case, we’re doing prefix searches, and lots and lots of rows match. Lots. And the performance is, well, horrible. Oleg and friends had a super-fast speedup for this ready for PostgreSQL 9.4, but it apparently got rejected.

So we’re stuck. It’s too big a transition to switch to ElasticSearch or something. The customer probably should bail on prefix searching (autocomplete) but they won’t. We have an idea for doing this the right way (convert prefixes to candidate full words, as Google does, using PG’s built-in lexeme tools) but that is also too much for budget. Finally, we don’t have the option to throw SSDs at it.



Metal Toad: Seeing Long Term Technology Adoption as Evolution

Planet Drupal - Wed, 2014-10-29 13:31

Much like an evolutionary tree, our goal in technology adoption is to continue to move forward and evolve, rather than getting caught in a dead end. In the natural world, becoming bigger can be good but can lead to extinction events should the environment or food source change. Right now we are in a technology Jurassic...


Andriy Kornatskyy: wheezy web: RESTful API Design

Planet Python - Wed, 2014-10-29 13:18
In this article we are going to explore a simple RESTful API created with the wheezy.web framework. The demo implements CRUD for tasks, and includes entity validation, content caching with dependencies, and functional test cases. The source code is structured with well defined actors (you can read more about it here). Design: The following convention is used with respect to operation, HTTP method (verb

PyPy Development: A Field Test of Software Transactional Memory Using the RSqueak Smalltalk VM

Planet Python - Wed, 2014-10-29 12:55
Extending the Smalltalk RSqueakVM with STM

by Conrad Calmez, Hubert Hesse, Patrick Rein and Malte Swart supervised by Tim Felgentreff and Tobias Pape

Introduction

After pypy-stm we can announce that a second VM implementation now supports software transactional memory: the RSqueakVM (which used to be called SPyVM). RSqueakVM is a Smalltalk implementation based on the RPython toolchain. We have added STM support based on the STM tools from RPython (rstm). The benchmarks indicate that linear scale-up is possible, although in some situations the STM overhead limits the speedup.

The work was done as a master's project at the Software Architecture Group of Professor Robert Hirschfeld at the Hasso Plattner Institut at the University of Potsdam. We - four students - worked about one and a half days per week for four months on the topic. The RSqueakVM was originally developed during a sprint at the University of Bern. When we started the project we were new to the topic of building VMs / interpreters.

We would like to thank Armin, Remi and the #pypy IRC channel, who supported us over the course of our project. We would also like to thank Toni Mattis and Eric Seckler, who provided us with an initial code base.

Introduction to RSqueakVM

Like the original Smalltalk implementation, the RSqueakVM executes a given Squeak Smalltalk image containing the Smalltalk code and a snapshot of previously created objects and active execution contexts. These execution contexts are scheduled inside the image (greenlets) and not mapped to OS threads. Thereby the non-STM RSqueakVM runs on only one OS thread.

Changes to RSqueakVM

The core adjustments to support STM were inside the VM and are transparent from the view of a Smalltalk user. Additionally, we added Smalltalk code to influence the behavior of the STM. As the RSqueakVM had run in one OS thread so far, we added the capability to start OS threads. Essentially, we added an additional way to launch a new Smalltalk execution context (thread), but in contrast to the original one, this one creates a new native OS thread, not a Smalltalk-internal green thread.

STM (with automatic transaction boundaries) already solves the problem of concurrent access to one value, as this is protected by the STM transactions (to be more precise, one instruction). But there are cases where the application relies on the fact that a bigger group of changes is executed either completely or not at all (atomically). Without further information, transaction borders could fall in the middle of such a set of atomic statements. rstm allows aggregating multiple statements into one higher-level transaction. To let the application mark the beginning and the end of these atomic blocks (high-level transactions), we added two more STM-specific extensions to Smalltalk.

Benchmarks

So far RSqueak was executed in a single OS thread. rstm enables us to execute the VM using several OS threads. Using OS threads, we expected a speed-up in benchmarks which use multiple threads. We measured this speed-up by using two benchmarks: a simple parallel summation where each thread sums up a predefined interval, and an implementation of Mandelbrot where each thread computes a range of predefined lines.

To assess the speed-up, we used one RSqueakVM compiled with rstm enabled, running the benchmarks once with OS threads and once with Smalltalk green threads. The workload always remained the same and only the number of threads increased. To assess the overhead imposed by the STM transformation, we also ran the green-threads version on an unmodified RSqueakVM. All VMs were translated with the JIT optimization, and all benchmarks were run once before the measurement to warm up the JIT. As the JIT optimization is working, it is likely to be adopted by VM creators (the baseline RSqueakVM did that), so results with this optimization are more relevant in practice than those without it. We measured the execution time by getting the system time in Squeak. The results are:

In the tables below, "Slowdown" compares RSqueak green threads to RSqueak/STM green threads, and "Speedup" compares RSqueak/STM green threads to RSqueak/STM OS threads.

Parallel Sum 10,000,000

Threads | RSqueak green | RSqueak/STM green | RSqueak/STM OS | Slowdown | Speedup
      1 |      168.0 ms |          240.0 ms |       290.9 ms |     0.70 |    0.83
      2 |      167.0 ms |          244.0 ms |       246.1 ms |     0.68 |    0.99
      4 |      167.8 ms |          240.7 ms |       366.7 ms |     0.70 |    0.66
      8 |      168.1 ms |          241.1 ms |       757.0 ms |     0.70 |    0.32
     16 |      168.5 ms |          244.5 ms |      1460.0 ms |     0.69 |    0.17

Parallel Sum 1,000,000,000

Threads | RSqueak green | RSqueak/STM green | RSqueak/STM OS | Slowdown | Speedup
      1 |    16831.0 ms |        24111.0 ms |     23346.0 ms |     0.70 |    1.03
      2 |    17059.9 ms |        24229.4 ms |     16102.1 ms |     0.70 |    1.50
      4 |    16959.9 ms |        24365.6 ms |     12099.5 ms |     0.70 |    2.01
      8 |    16758.4 ms |        24228.1 ms |     14076.9 ms |     0.69 |    1.72
     16 |    16748.7 ms |        24266.6 ms |     55502.9 ms |     0.69 |    0.44

Mandelbrot

Threads | RSqueak green | RSqueak/STM green | RSqueak/STM OS | Slowdown | Speedup
      1 |      724.0 ms |          983.0 ms |      1565.5 ms |     0.74 |    0.63
      2 |      780.5 ms |          973.5 ms |      5555.0 ms |     0.80 |    0.18
      4 |      781.0 ms |          982.5 ms |     20107.5 ms |     0.79 |    0.05
      8 |      779.5 ms |          980.0 ms |    113067.0 ms |     0.80 |    0.01
Discussion of benchmark results

First of all, the ParallelSum benchmarks show that the parallelism is actually paying off, at least for sufficiently large embarrassingly parallel problems. Thus RSqueak can also benefit from rstm.

On the other hand, our Mandelbrot implementation shows the limits of our current rstm integration. We implemented two versions of the algorithm, one using a single low-level array and one using two nested collections. In both versions one job only calculates a distinct range of rows, and both lead to a slowdown. The summary of the state of rstm transactions shows that there are a lot of inevitable transactions (transactions which must be completed). One reason might be the interactions between the VM and its low-level extensions, so-called plugins. We have to investigate this further.

Limitations

Although the current VM setup is working well enough to support our benchmarks, the VM still has limitations. First of all, as it is based on rstm, it has the current limitation of only running on 64-bit Linux.

Besides this, we also have two major limitations regarding the VM itself. First, the atomic interface exposed in Smalltalk is currently not working when the VM is compiled using the just-in-time compiler transformation. Simple examples such as a concurrent parallel sum work fine, while more complex benchmarks such as chameneos fail. The reasons for this are currently beyond our understanding. Second, Smalltalk supports green threads, which are threads managed by the VM and not mapped to OS threads. We currently support starting new Smalltalk threads as OS threads instead of starting them as green threads. However, existing threads in a Smalltalk image are not migrated to OS threads, but remain running as green threads.

Future work for STM in RSqueak

The work we presented showed interesting problems; we propose the following problem statements for further analysis:
  • Inevitable transactions in benchmarks. This looks like it could limit other applications too so it should be solved.
  • Collection implementations aware of STM: The current implementation of collections can cause a lot of STM collisions due to their internal memory structure. We believe there is potential for performance improvements if we replace these collections, in an STM-enabled interpreter, with implementations that produce fewer STM collisions. As already proposed by Remi Meier, bags, sets and lists are of particular interest.
  • Finally, we exposed STM through language features such as the atomic method, which is provided through the VM. Originally it was possible to model STM transaction barriers implicitly by using clever locks; now they are exposed via the atomic keyword. From a language design point of view, the question arises whether this is a good solution and what features an STM-enabled interpreter must provide to the user in general. Of particular interest are, for example, access to the transaction length and hints for transaction borders, and their performance impact.
Details for the technically inclined
  • Adjustments to the interpreter loop were minimal.
  • STM works on bytecode granularity; that means there is an implicit transaction border after every bytecode executed. Possible alternatives: only break transactions after certain bytecodes, or break transactions one abstraction layer above, e.g. at object methods (setters, getters).
  • rstm calls were exposed using primitives (a way to expose native code in Smalltalk); this was mainly used for atomic.
  • Starting and stopping OS threads is exposed via primitives as well. Threads are started from within the interpreter.
  • For Smalltalk-enabled STM code we currently have different image versions. However, another way to add, load and replace code in the Smalltalk code base is required to make switching between STM and non-STM code simple.
Details on the project setup

From a non-technical perspective, a problem we encountered was the huge round-trip times (on our machines up to 600s, or 900s with the JIT enabled). This led to a tendency toward bigger code changes ("Before we compile, let's also add this"), lost flow ("What were we doing before?") and testing different compiled interpreters in parallel ("How is this version different from the others?"). As a consequence it was harder to test and correct errors. While this is not as much of a problem for other RPython VMs, the RSqueakVM needs to execute the entire image, which makes running it untranslated even slower.

Summary

The benchmarks show that speed-up is possible, but also that the STM overhead can in some situations eat up the speedup. The resulting STM-enabled VM still has some limitations: as rstm currently only runs on 64-bit Linux, the RSqueakVM does so as well. Even though it is now possible for us to create new threads that map to OS threads within the VM, the migration of existing Smalltalk threads remains problematic.

We showed that an existing VM code base can benefit from STM in terms of scaling up. Furthermore, it was relatively easy to enable STM support. This may also be valuable to VM developers considering STM support for their VMs.


      Patrick Matthäi: geoip and geoip-database news!

      Planet Debian - Wed, 2014-10-29 11:43

      Hi,

geoip version 1.6.2-2 and geoip-database version 20141027-1 are now available in Debian unstable/sid, with news of more free databases being available :)

      geoip changes:

* Add patch for geoip-csv-to-dat to add support for building the GeoIP city DB.
  Many thanks to Andrew Moise for contributing!
* Add and install geoip-generator-asn, which is able to build the ASN DB. It is
  a modified version of the original geoip-generator. Much thanks for
  contributing also to Aaron Gibson!
* Bump Standards-Version to 3.9.6 (no changes required).

      geoip-database changes:

* New upstream release.
* Add new databases GeoLite city and GeoLite ASN to the new package
  geoip-database-extra. Also bump build depends on geoip to 1.6.2-2.
* Switch to xz compression for the orig tarball.

Many thanks to both contributors!


Stefane Fermigier: Abilian announces the start of commercialization of its business extranet dedicated to clusters and competitiveness clusters

      Planet Apache - Wed, 2014-10-29 11:03

On the occasion of the Open World Forum 2014, Abilian, a publisher of open source solutions serving business competitiveness, of which I am the founder and CEO, announces the start of commercialization of its offering dedicated to clusters and competitiveness clusters: Abilian SICS-PC (Système d'Information Collaboratif Sécurisé pour les Pôles de Compétitivité).

Read the rest on the Abilian website.


      Ed Crewe: Fixing third party Django packages for Python 3

      Planet Python - Wed, 2014-10-29 10:44
With the release of Django 1.7 it could be argued that the balance has finally tipped towards Python 3 being its preferred platform. Given that Python 2.7 is the last of the 2.* line, it's probably time we all thought about moving to Python 3 for our Django deployments.

The problem is those pesky third party package developers, because unless you are a determined wheel reinventor (unlikely if you use Django!) you are bound to have a range of third party eggs in your Django sites. As one of those pesky third party developers myself, it is about time I added Python 3 compatibility to my Django open source packages.

There are a number of resources related to porting Python from 2 to 3, including some specifically for Django, but hopefully this post may still prove useful as a summarised approach for doing it for your Django projects or third party packages. Hopefully it isn't too much work, and if you have been writing Python as long as I have, it may also get you out of some legacy syntax habits.

So let's get started. The first thing is to set up Django 1.7 with Python 3.
For repeatable builds we want pip and virtualenv, if they are not there already.
On a Linux platform such as Ubuntu you will have python3 installed as standard (although it is not yet the default python), so if you just add pip3 that lets you add the rest ...

      Install Python 3 and Django for testing
      sudo apt-get install python3-pip
      (OR sudo easy_install3 pip)
      sudo pip3 install virtualenv


      So now you can run virtualenv with python3 in addition to the default python (2.*)

      virtualenv --python=python3 myenv3
      cd myenv3
      bin/pip install django


Then add a src directory to put the egg you want to make compatible with Python 3 in, for example from git (of course you can do this as one pip line if the source is in git)


      mkdir src
      git clone https://github.com/django-pesky src/django-pesky
      bin/pip install -e src/django-pesky


Then run the django-pesky tests (assuming nobody uses an egg without any tests!),
so the command to run pesky's tests may be something like the following ...

      bin/django-admin.py test pesky.tests --settings=pesky.settings
One rather disconcerting thing that you will notice with tests is that the default assertEqual message is truncated in Python 3 where it wasn't in Python 2, with a count of the missing characters in square brackets, e.g.

      AssertionError: Lists differ: ['Failed to open file /home/jango/myenv/sr[85 chars]tem'] != []


      Common Python 2 to Python 3 errors
      And wait for those errors. The most common ones are:

      1. print statement without brackets
      2. except Error as err (NOT except Error, err)
      3. File open and file methods differ.
        Text files require better quality encoding - so more files default to bytes because strings in Python 3 are all stored in unicode
        (On the down side this may need more work for initial encoding clean up *,
        but on the plus side functional errors due to bad encoding are less likely to occur)
4. There is no unicode() function in Python 3 since all strings are now unicode - i.e. it has become str() and hence strings no longer need the u'string' marker
5. Since unicode() is not available, __unicode__ is no longer used for Django models' default representation. Hence just using
  def __str__(self):
          return self.name
  is the future-proofed method. I actually found that models with both __unicode__ and __str__ methods may not return any representation, rather than the __str__ one being used as one might assume, in Django 1.7 and Python 3
6. dictionary's has_key has gone; you must use in (if key in dict)

* I found that more raw strings were treated as bytes by Python 3, and these then required raw_string.decode(charset) to avoid them going into database string (e.g. varchar) fields as pseudo-bytes, i.e. strings that held 'élément' as '\xc3\xa9l\xc3\xa9ment' rather than bytes, i.e. b'\xc3\xa9l\xc3\xa9ment'
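As a minimal illustration of that decode step (assuming the source data is UTF-8):

raw = b'\xc3\xa9l\xc3\xa9ment'  # bytes, e.g. read from a file opened in binary mode
text = raw.decode('utf-8')      # 'élément' as a real Python 3 str
assert isinstance(text, str)    # now safe for a CharField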

      Ideally you will want to maintain one version but keep it compatible with Python 2 and 3,
      since this is both less work and gets you into the habit of writing transitional Python :-)

      Test the same code against Python 2 and 3
      So to do that you want to be running your tests with builds in both Pythons.
      So repeat the above but with virtualenv --python=python2 myenv2
      and just symlink the src/django-pesky to the Python 2 src folder.

      Now you can run the tests for both versions against the same egg code -
      and make sure when you fix for 3 you don't break for 2.

      For current Django 1.7 you would just need to support the latest Python 2.7
      and so the above changes are all compatible except for use of unicode() and how you call open().

      Version specific code
      However in some cases you may need to write code that is specific to 2 or 3.
If that occurs you can either use the try/except approach of attempting the latest version first and falling back for anything else (cross fingers)

      try:
          latest version compatible code (e.g. Python 3 - Django 1.7)
      except:
          older version compatible code (e.g. Python 2 - Django < 1.7)

      Or you can use specific version targetting ...

      import sys, django
      django_version = django.get_version().split('.')

if sys.version_info[0] == 3 and django_version[1] == '7':
    latest version
elif sys.version_info[0] == 2 and django_version[1] == '6':
    older django version
else:
    older version

      where ...

      django.get_version() -> '1.6' or '1.7.1'
sys.version_info -> sys.version_info(major=3, minor=4, micro=0, releaselevel='final', serial=0), so sys.version_info[0] (or .major) gives the major version; note that split('.') returns strings, hence the quoted comparisons above

Summary

So how did I get on with my first egg, django-csvimport? It actually proved quite time consuming, since the csv.reader library was far more sensitive to bad character encoding in Python 3, and so a more thorough manual alternative had to be implemented for those important edge cases - which the tests are aimed to cover. After all, if a CSV file is really well encoded and you already have a model for it, it hardly needs a pesky third party egg for CSV imports - a few Django shell lines using the csv library will do the job.


      A. Jesse Jiryu Davis: Toro 0.7 Released

      Planet Python - Wed, 2014-10-29 10:09

      I've just released version 0.7 of Toro. Toro provides semaphores, locks, events, conditions, and queues for Tornado coroutines. It enables advanced coordination among coroutines, similar to what you do in a multithreaded application. Get the latest version with "pip install --upgrade toro". Toro's documentation, with plenty of examples, is on ReadTheDocs.

      There is one bugfix in this release. Semaphore.wait() is supposed to wait until the semaphore can be acquired again:

@gen.coroutine
def coro():
    sem = toro.Semaphore(1)
    assert not sem.locked()

    # A semaphore with initial value of 1 can be acquired once,
    # then it's locked.
    sem.acquire()
    assert sem.locked()

    # Wait for another coroutine to release the semaphore.
    yield sem.wait()

      ... however, there was a bug and the semaphore didn't mark itself "locked" when it was acquired, so "wait" always returned immediately. I'm grateful to "abing" on GitHub for noticing the bug and contributing a fix.


      Code Karate: Finding the right brand

      Planet Drupal - Wed, 2014-10-29 07:28

      If you have been around CodeKarate.com for awhile you have noticed that our branding has been, we


      Mike Gabriel: Join us at "X2Go: The Gathering 2014"

      Planet Debian - Wed, 2014-10-29 07:27

TL;DR: Those of you who are not able to join "X2Go: The Gathering 2014"... join us on IRC (#x2go on Freenode) over the coming weekend. We will provide information, URLs to our TinyPads, etc. there. Spontaneous visitors are welcome during the working sessions (please let us know if you plan to come around), but we don't have any spare beds left for accommodation. (We are still trying hard to set up some sort of video coverage, be it live streaming or recorded sessions; this is still open. People who can offer help, see below.)

Our event "X2Go: The Gathering 2014" is approaching quickly. We will meet with a group of 13-15 people (the number is still fluctuating slightly) at Linux Hotel, Essen. Thanks to the generous offerings of the Linux Hotel [1] to FLOSS community projects, the costs of food and accommodation could be kept really low and affordable for many people.

We are very happy that people from outside Germany are coming to the meeting (Michael DePaulo from the U.S., Kjetil Fleten (http://fleten.net) from Denmark / Norway). And we are also proud that Martin Wimpress (Mr. Ubuntu MATE Remix) will join our gathering.

      In advance, I want to send a big THANK YOU to all people who will sponsor our weekend, either by sending gift items, covering travel expenses or providing help and knowledge to make this event a success for the X2Go project and its community around.



      resolv_wrapper 1.0.0 – the new cwrap tool

      Planet KDE - Wed, 2014-10-29 07:24

      I’ve released a new preloadable wrapper named resolv_wrapper which can be used for nameserver redirection or DNS response faking. It can be used in testing environment to route DNS queries to a real nameserver separate from resolv.conf or fake one with simple config file. We tested it on Linux, FreeBSD and Solaris. It should work on other UNIX flavors too.

      You can download resolv_wrapper here.


Acquia: Drupal in the Philippines, Own your Own Code & More - Luc Bézier

      Planet Drupal - Wed, 2014-10-29 07:00
On being an open source developer

      "Like a lot of people, I did both sides of technology; working on paid, proprietary systems [and open source]. There is a big difference. I can't imagine myself going back to any proprietary system where I have to pay; I can't share the code I am doing with anyone; I have to ask a company about the right tool to use. I love the way that everybody contributes to the same piece of code, trying to make it the best ... and for free!"
