Data Lab @ codecentric

I am happy to announce Data Lab @ codecentric.

With Data Lab @ codecentric, we want to extend and focus our technical and subject-specific expertise in data analysis, data mining, data security and data privacy, as well as in related areas.

With a team of young and seasoned specialists, we will not only advance our knowledge, but also continuously run experiments, build demos, identify use cases and so on, and report on the results publicly.

The first results are coming soon, so stay tuned to the blog.

Pavlo Baron

How to use Wikipedia’s full dump as corpus for text classification with NLTK

Wikipedia is not only a never-ending rabbit hole of information. You start with an article on a topic you want to know about, and hours later you end up on an article that has nothing to do with the topic you originally looked up. And all that time, you have just been clicking your way from one article to the next.

But from a different perspective, Wikipedia is probably the biggest crowd-sourced information platform there is, with a built-in review process and as many languages as its users care to maintain (even if, together with Google, it has almost completely ousted printed encyclopaedias). So if this is not Big Data, then what is (pardon my sarcasm)?

And, most importantly for this little post: Wikipedia comes with a more or less consistently maintained categorisation. Categories attached to article text can serve directly as classes for text classification in natural language processing (NLP). So I thought: why not use Wikipedia for text classification? I ended up with an implementation of an NLP corpus based on Wikipedia's full article dump, using groups of categories as classes and anti-classes. It can be used for whatever text you want to classify, as long as you follow Wikipedia's terms of use and accept its categorisation and article quality. If you don't, then, well, contribute and improve the quality like others do.
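To make the idea a bit more concrete, here is a minimal toy sketch (not the wpcorpus code) of how category-labelled article texts could be fed into an NLTK classifier. The labelled_articles list and the bag-of-words features are made up purely for illustration; in practice the pairs would come from the parsed Wikipedia dump.

```python
# Toy sketch: train an NLTK classifier on (article text, category) pairs.
# labelled_articles is a stand-in for texts extracted from the Wikipedia
# dump; it is NOT the wpcorpus API.
import nltk

labelled_articles = [
    ("The central processing unit executes machine instructions", "computing"),
    ("The symphony orchestra performed works by Beethoven", "music"),
    # ... many more (text, category) pairs extracted from the dump
]

def bag_of_words(text):
    # Naive word-presence features; real corpora need cleaning and filtering.
    return {word.lower(): True for word in text.split()}

featuresets = [(bag_of_words(text), category) for text, category in labelled_articles]
classifier = nltk.NaiveBayesClassifier.train(featuresets)

print(classifier.classify(bag_of_words("The unit decodes and executes instructions")))
```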

The whole code, including step-by-step usage instructions, is out on GitHub: https://github.com/pavlobaron/wpcorpus. Any constructive feedback and help are welcome.

Pavlo Baron

Graphlr: indexing antlr3 generated Java AST through a Neo4j graph

While working on my Sonar fork, which makes it possible to simulate refactorings without actually touching the source files, I once again realized what a PITA it is to traverse the ANTLR-generated Abstract Syntax Tree (AST) for Java. The mechanism is absolutely cool, no doubt. But the final AST representation is not intuitive, and the corresponding traversal code always looks ugly.

While working intensively with Neo4j, I asked myself: wouldn't it be nice to use it as an index for the Java AST? You would only need to jump to a relevant node and could still use classic AST traversal to get the details out of it. Or you could wire the whole AST through an accompanying graph and then use the graph to traverse the whole AST.
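Graphlr itself works on the ANTLR3-generated Java AST through Neo4j's Java API. Purely to illustrate the idea of a graph acting as an index into a syntax tree, here is a small sketch that uses Python's built-in ast module as a stand-in for the ANTLR tree and the official neo4j Python driver against a local Neo4j instance; the URI, credentials and node labels are assumptions, and none of this is the Graphlr code.

```python
# Rough illustration only: mirror "interesting" syntax-tree nodes into a
# Neo4j graph so they can be looked up directly, instead of walking the
# whole tree every time. Python's ast module stands in for the ANTLR3 AST.
import ast
from neo4j import GraphDatabase

source = """
class Foo:
    def bar(self):
        return 42
"""

tree = ast.parse(source)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Index every function definition: node kind, name and source line.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            session.run(
                "CREATE (:AstNode {kind: 'FunctionDef', name: $name, line: $line})",
                name=node.name, line=node.lineno,
            )

    # Later, jump straight to the relevant node via the graph,
    # then continue with classic tree traversal from that position.
    record = session.run(
        "MATCH (n:AstNode {name: $name}) RETURN n.line AS line", name="bar"
    ).single()
    print("bar defined at line", record["line"])

driver.close()
```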

(read more…)

Pavlo Baron