12/27/2013

BYOX, Security, UX, Responsibility

There are two sides to adopting mobility. The first is the desire for innovation: the opportunity to drive better business with new technologies and thus increase revenue. The second is the fear of change: the threat of holes in new processes which could result in enormous losses. Today I'd like to talk about the balance between these two sides, the balance between usability and security.


The Opportunity 
(thoughts of a typical employee)

My day-to-day job requires processing lots of documents and working with my company's internal tools. It also requires traveling to other offices, so I spend about 20% of my work time out of the office. I often save copies of documents on my laptop and mobile devices to be able to work with them on the move. It helps me do my job faster. But recently our whole department had to enroll in a program which forbids any storage of work documents. So now I take pictures of the most relevant documents. I realize that it's a security threat, but I'd rather get my job done than conform to overrated security policies.

The Threat 
(thoughts of security officer)

My company handles extremely sensitive data. Its exposure could result in my company's bankruptcy. I control its storage by locating data centers on my own premises. I control its flow by using secure mailing services from providers I can trust. Recently I learned that this data is under a great threat, because employees use tools to keep it on their mobile devices. Fortunately, I've overcome this challenge by introducing an MDM solution on everyone's device. I've protected the data from any other usage with strict policies, which all employees are obliged to follow.

The Conflict
We see a typical clash of interests. An employee just looks for the easiest way to get his job done. The security officer wants to make sure it's done in a safe way. Both are right. So where is the solution?
Which one should change their attitude? The user, who has signed the security policy and will answer for violating it, or the security officer, who's responsible for any data leakage?
I believe from this point on I'm switching from objective to subjective thinking.

The Irony
I'll rephrase the last question. If data gets leaked, who will suffer more? The employee, who gets fired, pays a fine and potentially goes to jail, or the company, which will go through numerous court cases and potentially cease to exist? Or, to put it another way: who's more aware of this risk? Whose interest is it to protect the data? Who should adapt to the situation? You see what I'm driving at.
The irony is that no matter how hard we try, we can't force people to drop their habits and change their ways of doing things by imposing our rules. The best we can do is encourage them to switch to a new way that suits them even better. And here's why.

The Good, The Bad and The Lazy
Back in 2007, Apple did a wonderful thing. They made a mobile phone so simple to use, and so functional at the same time! This set a very high standard of expectations among users. As things got simpler, people got lazier. And now an app can't be considered competitive unless its UX is intuitive. If people need to make more than trivial efforts to get what they want from an app, they simply quit and use another one. That's where we've brought ourselves; that's what the consumerization of IT is.

The Clash
And now this big wave of consumerization is reaching the enterprise. The world where large systems with dozens of buttons on one screen and frankly complex UX have been the standard for years is getting hit by these small screens and highly trivial use cases. The employee and the simplicity of his actions start to be the priority. Ignoring this priority means the employee will be much less productive, or worse, he'll find another way to complete his task, bypassing all our validation and security rules.
(Thanks for the pic, @daveslocombe)
The Responsibility
But using fancy apps is not the same as using enterprise software. Work is not always fun and simple. And that's where users have to realize their responsibility as employees. This is the difference between enterprise and consumer apps: employees still have to use the app, even if they don't like it.
Their productivity is another question, though.

The End
It's not about who's responsible for failure, it's about avoiding it. It's just that in this case, limitations are less effective than flexibility. It's possible to achieve the same security goals without losing much productivity.

The Reality
I'd like to conclude with a real-life example of what happens when employees are forced to do a job they don't want to. I witnessed it two weeks ago, after I had started writing this article. It was the middle of the Ukrainian revolution of 2013, December 11th, Kiev. People had been protesting for a third week, and one night the government gave the order to stop the protest by "cleaning out" the protesters by force. But most of the policemen didn't feel like fighting their fellow citizens, who hadn't done anything wrong. Still, they couldn't refuse to follow their orders, otherwise they'd be arrested themselves. So what it ended up with was a very weak attack, with policemen half-heartedly pushing at the people. This resulted in a clear victory for the protesters, who stood their ground and defended themselves, and a total failure of the government's plans.

12/18/2013

Solving the Santa Claus problem using OCaml

Santa’s problem

Not much time left till Xmas... Have you ever wondered how Santa does all his business with those elves and reindeer? And why?
Even if you don't care, there are people who do. At least one. John Trono of St. Michael's College in Vermont was so worried about how Santa handles all that stuff that he decided to emulate his work and wrote a somewhat simplified scenario of Santa's life:

“Santa Claus sleeps in his shop at the North Pole and can only be awakened by either (1) all nine reindeer being back from their vacation in the South Pacific, or (2) some of the elves having difficulty making toys; to allow Santa to get some sleep, the elves can only wake him when three of them have problems. When three elves are having their problems solved, any other elves wishing to visit Santa must wait for those elves to return. If Santa wakes up to find three elves waiting at his shop’s door, along with the last reindeer having come back from the tropics, Santa has decided that the elves can wait until after Christmas, because it is more important to get his sleigh ready. (It is assumed that the reindeer do not want to leave the tropics, and therefore they stay there until the last possible moment.) The last reindeer to arrive must get Santa while the others wait in a warming hut before being harnessed to the sleigh.”

Besides the given scenario, let's make some additional specifications:
  • After the ninth reindeer arrives, Santa must invoke prepare_sleigh, and then all nine reindeer must invoke get_hitched
  • After the third elf arrives, Santa must invoke help_elves. Concurrently, all three elves should invoke get_help.
  • All three elves must invoke get_help before any additional elves enter

Not very complicated, as you can see, until the moment you try to implement it. To make the solution less boring, I've decided to implement it in OCaml, not in an enterprise platform like .NET or Java. At the moment I'm writing this post, I haven't managed to find an OCaml solution on the internet. OCaml is an ML-derived functional language with static typing, pattern matching and automatic garbage collection. It has a fairly big standard library and a nice native-code compiler for a number of platforms. However, I've chosen it just to make solving Santa's problem a bit more challenging and interesting; in other words, just for fun. I'll try to comment on lines of code that look weird, so don't panic.

Pseudo-code solution

First, let's solve Santa's problem using pseudo-code. We'll use elf and reindeer counters protected by a mutex, a semaphore for Santa (he waits until either an elf or a reindeer signals him), a semaphore for the reindeer (they wait until Santa signals them to get hitched), a semaphore for the elves (they wait until Santa helps them) and a mutex to prevent additional elves from entering while three elves are being helped.

Santa's code is the easiest one (it runs in an infinite loop):
santa_sem.wait()
mutex.wait()
if reindeer == 9
{
prepare_sleigh()
reindeer_sem.signal(9)
}
else if elves == 3
{
help_elves()
elf_sem.signal(3)
}
mutex.signal()

Santa checks two conditions and deals either with the elves or with the reindeer. If there are nine reindeer waiting, Santa prepares the sleigh and signals the reindeer semaphore nine times, allowing the reindeer to invoke get_hitched. If there are elves waiting, Santa invokes help_elves.

The code for the reindeer is also not very complicated:
mutex.wait()
reindeer += 1
if reindeer == 9
{
santa_sem.signal()
}
mutex.signal()

reindeer_sem.wait()
get_hitched()

The ninth reindeer signals Santa and then joins the others waiting for reindeer_sem. When it’s signalled, they invoke get_hitched.

The code for the elves is quite similar, but it uses an additional turnstile for the three-elves logic:

elf_mutex.wait()
mutex.wait()
elves += 1
if elves == 3
{
santa_sem.signal()
}
else
{
elf_mutex.signal()
}
mutex.signal()

elf_sem.wait()
get_help()

mutex.wait()
elves -= 1
if elves == 0
{
elf_mutex.signal()
}
mutex.signal()


The first two elves release elf_mutex at the same time they release the mutex, but the last elf holds elf_mutex, preventing other elves from entering until all three elves have invoked get_help. The last elf to leave releases elf_mutex, allowing the next batch of elves to enter.

The OCaml part

Now the time has come to have some fun with OCaml. The first thing to mention is that OCaml does not have any built-in semaphore class (despite its rich standard library). But it's not a big issue, since it has Mutex and Condition classes in the Threads library (yeah, OCaml is Objective Caml and it does have classes), and we can use them to write our own semaphore. To make the semaphore more or less serious, let's write it in a separate module.

module Semaphore = struct
  class semaphore initial_count initial_name =
    object (self)
      val mutable count = initial_count
      val name = initial_name
      val sync = Mutex.create()
      val cond = Condition.create()
          
      method inc n = count <- count + n
      method dec n = count <- count - n

      method signal ?(n=1) () =
        Mutex.lock sync;
        self#inc n;
        for i = 1 to n do
          Condition.signal cond
        done;
        Mutex.unlock sync

      method wait =
        Mutex.lock sync;
        while count = 0 do
          Condition.wait cond sync
        done;
        self#dec 1;
        Mutex.unlock sync
    end
end;;


My semaphore has an internal mutable field count (yeah, OCaml is not a pure functional language like Haskell) used as a "gate width" for threads entering the semaphore simultaneously, an internal name (used for logging when I was hunting a deadlock in my code some time ago), one mutex and one condition variable. It has two primary methods: signal, with an optional parameter n, and wait, which are the usual increment/decrement methods of a semaphore. signal increments the internal counter and wakes up to n waiting threads; wait blocks while the counter is zero, then decrements it and lets the calling thread enter the critical section the semaphore guards.

If an OCaml program is split into modules, you can expect big fun trying to compile it. First, you have to generate an interface for the module. Then you have to compile both the interface and the module itself:

ocamlc -thread unix.cma threads.cma -i semaphore.ml > semaphore.mli
ocamlc -thread unix.cma threads.cma -c semaphore.mli
ocamlc -thread unix.cma threads.cma -c semaphore.ml


To enable multithreading support (in terms of POSIX-compatible threads) you have to compile your code with the -thread flag and include the compiled unix and threads modules.

Now let's write the main program. Since we're dealing with logging and multithreading, I've written a helper function which uses our Semaphore class to synchronize printing to stdout:

let stdout_sem = new Semaphore.semaphore 1 "stdout_sem";;
let puts s =
  stdout_sem#wait;
  Printf.printf "%s\n" s;
  flush stdout;
  stdout_sem#signal ();;


Next, I'm using a kind of transport structure shared between the Santa, reindeer and elf functions and protected by semaphores. This structure (a record, in OCaml terms) contains the counters and semaphores discussed earlier:

type santa_counters = { mutable elves : int;
                        mutable reindeer : int;
                        santa_sem : Semaphore.semaphore;
                        reindeer_sem : Semaphore.semaphore;
                        elf_sem : Semaphore.semaphore;
                        elf_mutex : Semaphore.semaphore;
                        mutex : Semaphore.semaphore };;


and a simple initializer:

let new_santa_counters () = { elves = 0;
                              reindeer = 0;
                              santa_sem = new Semaphore.semaphore 0 "santa_sem";
                              reindeer_sem = new Semaphore.semaphore 0 "reindeer_sem";
                              elf_sem = new Semaphore.semaphore 0 "elf_sem";
                              elf_mutex = new Semaphore.semaphore 1 "elf_mutex";
                              mutex = new Semaphore.semaphore 1 "mutex" };;


To make our example more realistic, I've implemented prepare_sleigh and the other functions so we can see what actually happens, using my helper for synchronized printing:

let prepare_sleigh () = puts "Prepare sleigh";;
let help_elves () = puts "Help Elves";;
let get_hitched () = puts "Get Hitched";;
let get_help () = puts "Get Help";;

You might think that the parentheses () at the end of each function are like the usual braces in Java, C++ etc., but actually each is an argument of type unit. Please refer to the tutorials for more details.

Let's take a look at our pseudo-code solutions implemented in OCaml:

let santa_role_func c =
  c.santa_sem#wait;
  c.mutex#wait;

  if c.reindeer = 9 then (
    prepare_sleigh ();
    c.reindeer_sem#signal ~n:9 ();
    c.reindeer <- 0
  )
  else if c.elves = 3 then (
    help_elves ();
    c.elf_sem#signal ~n:3 ()
  );

  c.mutex#signal ();;


let reindeer_role_func (c, i) =
  let s = Printf.sprintf "Starting reindeer (%d)" i in
  puts s;

  c.mutex#wait;
  c.reindeer <- c.reindeer + 1;
  if c.reindeer = 9 then c.santa_sem#signal ();
  c.mutex#signal ();

  c.reindeer_sem#wait;
  get_hitched ();;


let elves_role_func (c, i) =
  let s = Printf.sprintf "Starting elf [%d]" i in
  puts s;
 
  c.elf_mutex#wait;
  c.mutex#wait;
  c.elves <- c.elves + 1;
  if c.elves = 3 then
    c.santa_sem#signal ()
  else
    c.elf_mutex#signal ();
  c.mutex#signal ();
 
  c.elf_sem#wait;
  get_help ();

  c.mutex#wait;
  c.elves <- c.elves - 1;
  if c.elves = 0 then c.elf_mutex#signal ();
  c.mutex#signal ();;


You can notice that santa_role_func accepts one parameter, c (our transport structure), while the other two accept two parameters. That's because Santa's role function runs in a loop while the others run just once. The second parameter of the elf and reindeer functions is the index of the thread they run in (for debugging and visualization purposes).

The last step (apart from compilation) is to make all this stuff work together:

let c = new_santa_counters () in
let santa_loop () =
  puts "Starting Santa loop";
  while true do
    santa_role_func c;
  done
in
let santa_array = [| Thread.create santa_loop () |]
and reindeer_array = Array.init 9 (fun i -> Thread.create reindeer_role_func (c, i))
and elf_array = Array.init 20 (fun i -> Thread.create elves_role_func (c, i))
in
Array.iter Thread.join (Array.concat [santa_array; reindeer_array; elf_array]);;


The code above creates three arrays of threads: santa_array (which always contains just one element), reindeer_array (always nine reindeer threads) and elf_array (twenty elf threads, a number chosen arbitrarily). After starting each thread, the main program joins all of them with a bit of functional magic using Array.iter.

What happened at the North Pole

I've copied typical stdout of the santa_problem binary below (along with the OCaml version, for clarity).

> ocaml -version

The OCaml toplevel, version 4.01.0

> ./build.sh
> ./santa_problem
Starting santa loop
Starting reindeer (4)
Starting reindeer (5)
Starting reindeer (6)
Starting reindeer (3)
Starting reindeer (7)
Starting reindeer (8)
Starting elf [0]
Starting reindeer (2)
Starting elf [1]
Starting elf [2]
Starting elf [3]
Starting elf [4]
Starting reindeer (1)
Starting elf [5]
Starting elf [6]
Starting elf [7]
Starting elf [8]
Starting elf [9]
Starting elf [10]
Starting elf [11]
Starting elf [12]
Starting reindeer (0)
Starting elf [13]
Starting elf [14]
Starting elf [15]
Starting elf [19]
Prepare sleigh
Starting elf [16]
Starting elf [18]
Get Hitched
Get Hitched
Get Hitched
Get Hitched
Get Hitched
Get Hitched
Get Hitched
Get Hitched
Get Hitched
Starting elf [17]
Help Elves
Get Help
Get Help
Get Help
Help Elves
Get Help
Get Help
Get Help
Help Elves
Get Help
Get Help
Get Help
Help Elves
Get Help
Get Help
Get Help
Help Elves
Get Help
Get Help
Get Help
Help Elves
Get Help
Get Help
Get Help
……

Merry Xmas

Santa's problem is one of the classical synchronization problems and is worth looking into. A lot of solutions exist nowadays. For example, there is an interesting approach to solving it in Haskell using Software Transactional Memory (STM). Even though OCaml does not provide features as cool as STM, we can see that building parallel programs in OCaml is easy fun! As far as I can tell, the solution above is the first pure OCaml solution to Santa's problem.
You can download all code from the Santa's gist.

12/16/2013

Introduction to Android NDK

Introduction


To develop applications for Android OS, Google offers two development packages: the SDK and the NDK. There are many articles and books, as well as good guidelines from Google, about the SDK. But even Google does not provide enough material about the NDK. Among the existing books, I would like to single out only one: Onur Cinar, "Pro Android C++ with the NDK" (2012).
This article is intended for those with little (or some) background in the Android NDK who would like to strengthen their knowledge. I will pay particular attention to JNI, as it seems to me that this interface is the right place to start. At the end, we will review a short example with two functions for file writing and reading.

What is Android NDK?


Android NDK (Native Development Kit) is a set of tools that allows you to implement parts of your application in languages such as C/C++.

When to use the NDK?


Google recommends using the NDK only in rare cases. Usually these cases are the following:
  • The need to increase performance (e.g. sorting large data volumes);
  • Use of a third-party library. Many applications are written in C/C++ and it is necessary to reuse the existing material; examples of such libraries are FFmpeg and OpenCV;
  • Low-level programming (for example, anything that goes beyond Dalvik).

What is JNI?


The Java Native Interface is a standard mechanism for executing code under the control of the Java Virtual Machine. The code is written in assembler or C/C++ and assembled into dynamic libraries, which avoids static binding. This provides an opportunity to call a C/C++ function from a Java program and vice versa.

JNI Advantages


The main competitive advantage of JNI over its analogues (Netscape's Java Runtime Interface, Microsoft's Raw Native Interface and the COM/Java interface) is that it was initially designed for binary compatibility: applications written against JNI work with any Java virtual machine on a given platform. (When speaking about JNI, I do not mean the Dalvik machine specifically; JNI was written by Oracle for the JVM, and it applies to all Java virtual machines.) That is why compiled C/C++ code will be executed regardless of the platform. Earlier interfaces did not provide binary compatibility.
Binary compatibility is a type of program compatibility that allows a program to work in different environments without changing its executable files.

Organization of JNI


Figure 1. – JNI – Interface pointer
The JNI function table is organized like a table of virtual functions in C++. The VM can work with several such tables; for example, one for debugging and another for normal use. The JNI interface pointer is valid only in the current thread, which means the pointer cannot be passed from one thread to another. However, native methods can be called from different threads.
Example:
jdouble Java_pkg_Cls_f__ILjava_lang_String_2 (JNIEnv *env, jobject obj, jint i, jstring s)
{
     const char *str = (*env)->GetStringUTFChars(env, s, 0); 
     (*env)->ReleaseStringUTFChars(env, s, str); 
     return 10;
}
  • *env – the interface pointer;
  • obj – a reference to the object in which the native method is declared;
  • i and s – the passed arguments.

Primitive types are copied between the VM and native code, while objects are passed by reference. The VM must trace all references that are passed to native code: the GC cannot free references while native code holds them, and native code, in turn, should inform the VM when it no longer needs the references to passed objects.

Local and Global References


JNI defines three reference types: local, global and weak global references. Local ones are valid until the native method returns. All Java objects returned by JNI functions are local references. A programmer can rely on the VM to clean up all local references after the method returns. Local references are available only in the thread where they were created. However, if necessary, they can be freed immediately using the DeleteLocalRef JNI method:

jclass clazz;
clazz = (*env)->FindClass(env, "java/lang/String");
...
(*env)->DeleteLocalRef(env, clazz);

Global references remain valid until they are explicitly freed. To create a global reference you have to call the NewGlobalRef method. When a global reference is no longer needed, it can be deleted with the DeleteGlobalRef method:

jclass localClazz;
jclass globalClazz;
...
localClazz = (*env)->FindClass(env, "java/lang/String");
globalClazz = (*env)->NewGlobalRef(env, localClazz);
...
(*env)->DeleteLocalRef(env, localClazz);

Errors


JNI does not check for errors such as NullPointerException or IllegalArgumentException. The reasons:

  • checking would decrease performance;
  • in most C library functions it is very hard to protect against such errors.

JNI allows the use of Java exceptions. Most JNI functions return an error code rather than the exception itself; therefore it is necessary to handle the error code in native code and throw the exception to Java. In JNI you should check the error code of the called function and, after that, call ExceptionOccurred(), which returns the error object:

jthrowable ExceptionOccurred(JNIEnv *env);

For example, some JNI array-access functions don't return errors, but they can raise an ArrayIndexOutOfBoundsException or ArrayStoreException.

JNI Primitive Types


JNI has its own primitive and reference data types.
Table 1. Primitive types.
Java Type   Native Type   Description
boolean     jboolean      unsigned 8 bits
byte        jbyte         signed 8 bits
char        jchar         unsigned 16 bits
short       jshort        signed 16 bits
int         jint          signed 32 bits
long        jlong         signed 64 bits
float       jfloat        32 bits
double      jdouble       64 bits
void        void          N/A
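The actual typedefs live in jni.h and are supplied per platform, but the widths in Table 1 can be sanity-checked with fixed-width stand-ins from <stdint.h>. The my_ typedefs below are illustrative, not the real jni.h definitions, and the float/double widths are assumptions that hold on common ABIs:

```c
#include <assert.h>
#include <stdint.h>

/* Fixed-width stand-ins matching the widths listed in Table 1.
 * The real typedefs are defined in jni.h and vary per platform ABI. */
typedef uint8_t  my_jboolean;  /* unsigned 8 bits  */
typedef int8_t   my_jbyte;     /* signed 8 bits    */
typedef uint16_t my_jchar;     /* unsigned 16 bits */
typedef int16_t  my_jshort;    /* signed 16 bits   */
typedef int32_t  my_jint;      /* signed 32 bits   */
typedef int64_t  my_jlong;     /* signed 64 bits   */
typedef float    my_jfloat;    /* 32 bits on common ABIs */
typedef double   my_jdouble;   /* 64 bits on common ABIs */
```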

JNI Reference Types 


Figure 2. – JNI reference types

Modified UTF-8


JNI uses modified UTF-8 strings to represent string data, while Java itself uses UTF-16 internally. Modified UTF-8 is convenient in C because it encodes \u0000 as the two bytes 0xC0 0x80 instead of the usual 0x00, so an encoded string never contains an embedded null byte and can be processed with standard C string functions. Character sequences that contain only non-null ASCII characters are still represented with one byte per character, as in standard UTF-8.
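As a rough sketch of that encoding rule (covering only the BMP range and ignoring the surrogate-pair encoding of supplementary characters; the function name is mine, not a JNI API):

```c
#include <assert.h>
#include <stddef.h>

/* Encode one Unicode code point (U+0000..U+FFFF) as modified UTF-8.
 * Returns the number of bytes written to out (1..3).
 * Unlike standard UTF-8, U+0000 becomes the pair 0xC0 0x80, so the
 * encoded string never contains an embedded null byte. */
static size_t encode_modified_utf8(unsigned int cp, unsigned char *out) {
    if (cp != 0 && cp < 0x80) {          /* non-null ASCII: one byte */
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp < 0x800) {             /* includes the special U+0000 case */
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else {                             /* rest of the BMP */
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    }
}
```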

JNI Functions


The JNI interface includes not only its own data types but also its own functions. Reviewing them all would take a lot of time, since there are plenty of them. You can find more information in the official documentation: http://docs.oracle.com/javase/6/docs/technotes/guides/jni/spec/functions.html

Sample of using JNI functions 


Below you will find a short example to make sure you have correctly understood the material covered:

#include <jni.h>
    ...
JavaVM *jvm;
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption* options = new JavaVMOption[1];
options[0].optionString = "-Djava.class.path=/usr/lib/java";
vm_args.version = JNI_VERSION_1_6;
vm_args.nOptions = 1;
vm_args.options = options;
vm_args.ignoreUnrecognized = false;
JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);
delete[] options;
jclass cls = env->FindClass("Main");
jmethodID mid = env->GetStaticMethodID(cls, "test", "(I)V");
env->CallStaticVoidMethod(cls, mid, 100);
jvm->DestroyJavaVM();

Let's analyze it line by line:

  • JavaVM – provides the invocation interface, which allows you to create and destroy a JavaVM;
  • JNIEnv – provides most of the JNI functions;
  • JavaVMInitArgs – arguments for the JavaVM;
  • JavaVMOption – options for the JavaVM.

The JNI_CreateJavaVM() function initializes a JavaVM and returns a pointer to the JNI interface pointer.
The DestroyJavaVM() method unloads the created JavaVM.

Threads 


The kernel manages all threads running on Linux; still, they can be attached to the JavaVM via the AttachCurrentThread and AttachCurrentThreadAsDaemon functions. An unattached thread has no access to JNIEnv. Android doesn't stop threads created from JNI even when the GC runs. A thread remains attached until it calls the DetachCurrentThread method to detach itself from the JavaVM.

First Steps


The structure of your project should look as shown in Figure 3:
Figure 3. – Project Structure
As Figure 3 shows, all the native code is stored in the jni folder. After a project build, the libs folder will be split into four subfolders: one separate native library for each processor architecture. The number of libraries depends on the number of architectures selected.
To create a native project, create a regular Android project and follow these steps:
  • Create a jni folder in the project source root for the native code sources;
  • Create an Android.mk file to build the project;
  • Create an Application.mk file to store compilation details. It is not required, but recommended, as it allows flexible compilation settings;
  • Create an ndk-build file that will launch the compilation process (also not required).

Android.mk


As mentioned before, Android.mk is a makefile for native project compilation. Android.mk is used to group your code into modules. By modules I mean static libraries (copied into the libs folder of your project), shared libraries and standalone executables.
Example of minimal configuration:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE    := NDKBegining
LOCAL_SRC_FILES := ndkBegining.c
include $(BUILD_SHARED_LIBRARY)

Let's take a detailed look at it:

  • LOCAL_PATH := $(call my-dir) – the my-dir function returns the path of the folder in which the file is located;
  • include $(CLEAR_VARS) – clears all the variables except LOCAL_PATH. This is necessary because all the files are compiled in a single GNU Make context, where all variables are global;
  • LOCAL_MODULE – the name of the output module. In the example above, the output module name is set to NDKBegining, but after the build a libNDKBegining library is created in the libs folder. Android adds the lib prefix to the name, but in Java code you should refer to the library by its name without the prefix (that is, the same name as in the makefile);
  • LOCAL_SRC_FILES – the list of source files to be built;
  • include $(BUILD_SHARED_LIBRARY) – sets the type of the output module.
You may set custom variables in Android.mk; however, they must not use the reserved prefixes LOCAL_, PRIVATE_, NDK_ and APP_, or lowercase names like my-dir. Google recommends prefixing custom variables with MY_. For example:

MY_SOURCE := NDKBegining.c

To reference this variable, use $(MY_SOURCE).
Variables can also be concatenated, for example:

LOCAL_SRC_FILES += $(MY_SOURCE)

Application.mk


This makefile defines several variables that make compilation more flexible:

  • APP_OPTIM – an optional variable set either to release or debug. It is used to control optimization when building the application's modules. A release build can still be debugged, but a debug build provides more information for debugging;
  • APP_BUILD_SCRIPT – defines an alternate path to Android.mk;
  • APP_ABI – probably one of the most essential variables. It specifies the target processor architectures to compile the modules for. By default, APP_ABI is set to armeabi, which corresponds to the ARMv5TE architecture. To support ARMv7, use armeabi-v7a; for IA-32, x86; for MIPS, mips; and for multiple architectures, you can set APP_ABI := armeabi armeabi-v7a x86 mips. With NDK revision 7 and higher, you can simply set APP_ABI := all rather than enumerating all the architectures;
  • APP_PLATFORM – names the target platform;
  • APP_STL – Android provides a very minimal libstdc++ runtime library, so a developer is limited in using C++ functionality. The APP_STL variable enables support for extended functionality;
  • NDK_TOOLCHAIN_VERSION – enables the selection of a GCC compiler version (by default, 4.6).
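Putting the variables above together, a hypothetical Application.mk might look like this (all values are illustrative examples for an imaginary project, not defaults):

```makefile
# Hypothetical Application.mk; values are examples, not defaults
# Build optimized modules (use "debug" for debuggable builds)
APP_OPTIM := release
# Target ABIs; with NDK r7+ you can write: APP_ABI := all
APP_ABI := armeabi armeabi-v7a x86 mips
# Target Android platform level
APP_PLATFORM := android-14
# Use an extended C++ runtime instead of the minimal system one
APP_STL := stlport_static
# Select the GCC version (default is 4.6)
NDK_TOOLCHAIN_VERSION := 4.8
```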

NDK-BUILD


ndk-build is a wrapper around GNU Make.
Since revision 4, ndk-build supports the following flags:
  • clean – cleans all the generated binary files;
  • NDK_DEBUG=1 – generates debuggable code;
  • NDK_LOG=1 – displays log messages (used for debugging);
  • NDK_HOST_32BIT=1 – forces the use of the 32-bit toolchain (Android also ships 64-bit versions of the utilities, for example NDK_PATH\toolchains\mipsel-linux-android-4.8\prebuilt\windows-x86_64, etc.);
  • NDK_APPLICATION_MK=<file> – indicates the path to Application.mk.
NDK revision 5 introduced the NDK_DEBUG flag. When it is set to 1, the debug version is built. If the flag is not set, ndk-build by default verifies whether the attribute android:debuggable="true" is set in AndroidManifest.xml. If you are using NDK revision 8 or above, Google does not recommend using the android:debuggable attribute in AndroidManifest.xml. (If you use "ant debug" or build the debug version by means of the ADT plug-in, the NDK_DEBUG=1 flag is added automatically.)
By default, the 64-bit version of the utilities is used; however, you can force the use of a 32-bit toolchain with NDK_HOST_32BIT=1. Still, Google recommends the 64-bit utilities for better performance with large programs.

How to build a project?


It used to be a painful process: you would install the CDT plug-in, download the cygwin or mingw compiler, download the Android NDK, configure all this stuff in the Eclipse settings, and finally, it wouldn't work. The first time I started working with the Android NDK, it took me three days to configure all these things. The problem was in Cygwin: permission 777 should have been set on the project folder.
Now it's much easier. Just follow this link http://developer.android.com/sdk/index.html and download the ADT Bundle, which provides everything you need to start compiling.

Invoke the native methods from Java code


To call native code from Java, first of all you need to declare native methods in a Java class. For example:

native String nativeGetStringFromFile(String path) throws IOException;
native void nativeWriteByteArrayToFile(String path, byte[] b) throws IOException;

You should put the reserved word “native” before the method; this way the compiler knows that it is an entry point into the JNI. These methods should be implemented in C/C++ files. Google also recommends naming such methods nativeX, where X stands for the method’s actual name. Still, before implementing these methods manually you should generate a header file. You can do this either by hand or with the JDK’s javah utility. Let’s take it a step further and run it not from the console, but by the standard Eclipse means.

  • Go to Eclipse and select Run-External Tools-External Tools Configuration;
  • Create new configuration;
  • Indicate the path to javah.exe from jdk in Location field (for example, C:\Program Files (x86)\Java\jdk1.6.0_35\bin\javah.exe);
  • Indicate the path to the directory bin/classes (for example, «${workspace_loc:/NDKBegin/bin/classes}») in the working directory;
  • Arguments should be populated with the following argument: “-jni ${java_type_name}” (with no inverted commas).

Now we can run it. Your header files will be stored in the bin/classes directory. 
As a next step, copy these files into the jni directory of the native project. Next, open the project’s context menu and select Android Tools – Add Native Library. This allows us to use jni.h functions. Later on you can create a cpp file (sometimes Eclipse creates it by default) and write the method bodies that have been defined in the header file.
You won’t find a code sample here; I have omitted it on purpose, for the sake of the article’s length and readability. Please follow the link on GitHub if you need an example: https://github.com/viacheslavtitov/NDKBegining



12/13/2013

RRD and rrdtool, sar graphs using pyrrd

Rather beautiful and illustrative graphs, don't you think? :)
This has always been my weakness: representing dull statistical numbers in an obvious and informative form. If you share it with me, today I will tell you about RRD and rrdtool, rather easy and customizable tools for storing statistical data and building graphs from the stored information.

Basic theory - explanation by simple example

An RRD (round-robin database) is a database with a constant size and a constant number of records. All the records in an RRD are stored cyclically. What does that mean?
Let's imagine that you want to store the results of pinging a host for the last 24 hours, in milliseconds. The ping command is executed every 5 minutes, so a result is written to the database every 300 seconds. A little math:
(60*60*24) / 300 = 288
This means that our database will have a constant size of 288 rows. After the first 288 records, the oldest records, starting from the first one, will be overwritten as new ones are added to the end of the database.
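The cyclic overwrite can be sketched with a fixed-size buffer in a few lines of Python (this is only an illustration of the idea, not how rrdtool actually stores data on disk):

```python
from collections import deque

# A day of 5-minute samples: (60*60*24) / 300 = 288 slots.
SLOTS = (60 * 60 * 24) // 300          # 288

rrd_like = deque(maxlen=SLOTS)         # oldest entry is dropped automatically

for i in range(SLOTS + 10):            # write 10 more values than the buffer holds
    rrd_like.append(i)

print(len(rrd_like))                   # 288: size stays constant
print(rrd_like[0])                     # 10: the first ten values were overwritten
```

`deque(maxlen=...)` gives exactly the round-robin behaviour: once the buffer is full, each append silently evicts the oldest record.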
Now let's complicate our task a bit. We want to save ping results for the last month. The ping command is still executed every 5 minutes (300 seconds), but we don't need such accuracy for a month's statistics, so we can set up our database to insert a record every hour. This means that our database will cache 12 values for the last 60 minutes and will add one consolidated value to the database: AVERAGE, MINIMUM, MAXIMUM or LAST, depending on the database configuration.
(60*60*24*31) / 300 / 12 = 744
So, we will need 744 rows to store ping statistics per month.
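The monthly figure can be double-checked with plain arithmetic (nothing rrdtool-specific here):

```python
STEP = 300                         # one ping every 5 minutes
PER_ROW = 12                       # 12 raw samples consolidated into one hourly value
MONTH = 60 * 60 * 24 * 31          # seconds in a 31-day month

rows_month = MONTH // STEP // PER_ROW

print(rows_month)                  # 744
```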

Explanation in RRD "language"

First of all, an RRD database must have at least one Data Source (DS). This is the structure of a DS:
DS:variable_name:DST:heartbeat:min:max
  • variable_name — the name of the variable that will be used to access data in the database.
  • DST (Data Source Type) can be one of the following:
    1. COUNTER — saves the rate of change; best for data that always increases (amount of traffic, and so on);
    2. DERIVE — same as COUNTER, but allows negative rates;
    3. ABSOLUTE — saves the rate of change, but assumes that the previous value was 0; good if we want to count events since the last data input;
    4. GAUGE — stores the value itself, not the rate; best for monitoring data such as CPU load, memory usage, ping milliseconds, and so on.
  • heartbeat — the time between two data inputs, in seconds.
  • min and max — the minimum and maximum allowed values for input data.
So, according to our example, the DS will look this way:
DS:ping_ms:GAUGE:300:0:100000
The variable name is 'ping_ms', the Data Source Type is GAUGE, and the heartbeat is set to 300 seconds, because we execute the ping command every 300 seconds. The minimum value is 0, and the maximum value is set to 100000.
Also, an RRD must include at least one Round-robin Archive (RRA). The RRA's structure:
RRA:CF:xff:step:rows
Input data comes into the database through the DS every 300 seconds. The step parameter sets how many input values should be collected in the cache before a Consolidation Function (CF) is applied to them. The rows parameter sets how many values this RRA will contain.
Let's get back to our example.
Our RRD will include 2 RRAs: one for the last 24 hours and one for the last 31 days. So, here is the first RRA:
RRA:AVERAGE:0.5:1:288
This means that our first archive, which stores data for the last 24 hours, will consist of 288 rows, and every new value will be added to the database immediately. The CF can be ignored in this case, because no values will be placed in the cache.
Second RRA:
RRA:AVERAGE:0.5:12:744
This archive will store info for the last 31 days and will contain 744 rows. Every new value will be placed in the cache. When the number of values in the cache reaches 12, the CF will be applied to them. In our case the CF is AVERAGE, which means that the arithmetic average of the cached values will be inserted into the database. Other possible CFs are:
  • MINIMUM — minimum value in cache;
  • MAXIMUM — maximum value in cache;
  • LAST — last value in cache.
In addition, coming back to our example, the host we are pinging can be unreachable for some time. In that case the input value will be of a special RRD type called UNKNOWN. xff is a float parameter, from 0 to 1, which sets the maximum fraction of UNKNOWN values in the cache at which the CF may still be applied. For our second RRA xff is set to 0.5, which means that the CF will be applied only if the number of UNKNOWN values in the cache is less than or equal to 6. xff for the first RRA can be ignored.
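To make the consolidation rules concrete, here is a simplified model of one consolidation step. It is an illustration only: the consolidate function, its arguments and the None-as-UNKNOWN convention are my own, not part of rrdtool:

```python
def consolidate(cache, cf="AVERAGE", xff=0.5):
    """Apply a consolidation function to one cache-full of samples.

    cache -- list of floats, with None marking an UNKNOWN sample
    xff   -- maximum allowed fraction of UNKNOWN values
    Returns the consolidated value, or None (UNKNOWN) when too many
    samples are UNKNOWN.
    """
    known = [v for v in cache if v is not None]
    unknown_fraction = (len(cache) - len(known)) / len(cache)
    if unknown_fraction > xff:
        return None
    if cf == "AVERAGE":
        return sum(known) / len(known)
    if cf == "MINIMUM":
        return min(known)
    if cf == "MAXIMUM":
        return max(known)
    if cf == "LAST":
        return known[-1]
    raise ValueError("unsupported CF: " + cf)

# 12 five-minute samples with 6 UNKNOWNs: exactly at the 0.5 threshold, still allowed
cache = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0] + [None] * 6
print(consolidate(cache))                        # 35.0
# 7 UNKNOWNs out of 12 exceed xff=0.5, so the stored value is UNKNOWN
print(consolidate(cache[:5] + [None] * 7))       # None
```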

Drawing techniques

After we have updated our RRD with some data, we can easily create graphs based on data fetched from the RRD. All the practical steps are described in the section below, but there are several points to understand before we can create graphs:
  1. The time period that will be represented in the graph must be defined;
  2. We must define a default variable, which will be the source of data from our RRD database. In our case it is all the values of the ping_ms variable in the RRA archive, which will be chosen automatically according to the defined period of time. Here is the syntax for the default variable:
    DEF:<default_variable_name>=<rrd_database_file>:<name_of_variable_in_rrd>:<consolidation_function>
    For example:
    DEF:ping_host=ping.rrd:ping_ms:AVERAGE
  3. We can add more variables, based on the default variable, and apply some math to them. All the additional calculations must be written in Reverse Polish Notation (RPN). There are two types of additional variables that we can create:
    • VDEF: a consolidation function will be applied to all values in the RRA. Its syntax:
      VDEF:<variable_name>=<default_variable_name>,<consolidation_function>
      For example:
      VDEF:ping_average=ping_host,AVERAGE
    • CDEF: some calculation will be applied to all values in RRA. The calculation must be written as RPN expression. CDEF's syntax:
      CDEF:<variable_name>=<rpn_expression>
      For example, if ping_host is greater than ping_average, its value will be assigned to ping_over_average:
      CDEF:ping_over_average=ping_host,ping_average,GT,ping_host,0,IF
  4. Every defined variable will be shown on the graph as a graphical element. There are 5 types of graphical elements: AREA, LINE1, LINE2, LINE3 (the number stands for the line width) and STACK. Please see the example below to understand how the variables are connected with graphical elements.
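The RPN expressions used in CDEF are easy to demystify with a tiny stack evaluator. The sketch below is illustrative and handles only the operators from our example (GT, LT, IF); it is not rrdtool's actual implementation:

```python
def eval_rpn(expr, variables):
    """Evaluate a CDEF-style RPN expression such as
    'ping_host,ping_average,GT,ping_host,0,IF'."""
    stack = []
    for token in expr.split(","):
        if token == "GT":
            b, a = stack.pop(), stack.pop()
            stack.append(1.0 if a > b else 0.0)
        elif token == "LT":
            b, a = stack.pop(), stack.pop()
            stack.append(1.0 if a < b else 0.0)
        elif token == "IF":
            # stack holds: condition, value-if-true, value-if-false (top)
            false_v, true_v, cond = stack.pop(), stack.pop(), stack.pop()
            stack.append(true_v if cond else false_v)
        elif token in variables:
            stack.append(variables[token])
        else:
            stack.append(float(token))
    return stack.pop()

# ping_host is above the average, so its own value is kept
print(eval_rpn("ping_host,ping_average,GT,ping_host,0,IF",
               {"ping_host": 120.0, "ping_average": 80.0}))   # 120.0
# ping_host is below the average, so 0 is produced instead
print(eval_rpn("ping_host,ping_average,GT,ping_host,0,IF",
               {"ping_host": 40.0, "ping_average": 80.0}))    # 0.0
```

Reading the expression left to right with a stack in mind makes any CDEF line in the examples below straightforward to follow.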

Creating and updating database. Simple graph

Now, after some theory, let's try some practice. The main tool for working with an RRD database is the command-line utility rrdtool. There are modules for different languages (Perl, Python), but on closer inspection it becomes apparent that they are wrappers around rrdtool, so rrdtool is a required dependency for any RRD module.
Let's create an RRD database file, using the Data Source and Round-Robin Archives from the example above. One additional parameter is needed for creating an RRD: the initial start time, in Unix timestamp format:
$ rrdtool create ping.rrd \
  --start 1386350100 \
  DS:ping_ms:GAUGE:300:0:100000 \
  RRA:AVERAGE:0.5:1:288 \
  RRA:AVERAGE:0.5:12:744
Several checks, for peace of mind =):
$ file ping.rrd
  ping.rrd: RRDTool DB version 0003
$ rrdtool info ping.rrd
  filename = "ping.rrd"
  rrd_version = "0003"
  step = 300
  last_update = 1386350100
  ds[ping_ms].type = "GAUGE"
  ds[ping_ms].minimal_heartbeat = 300
  ds[ping_ms].min = 0.0000000000e+00
  ds[ping_ms].max = 1.0000000000e+05
  ds[ping_ms].last_ds = "UNKN"
  ds[ping_ms].value = 0.0000000000e+00
  ds[ping_ms].unknown_sec = 200
  rra[0].cf = "AVERAGE"
  rra[0].rows = 288
  rra[0].pdp_per_row = 1
  rra[0].xff = 5.0000000000e-01
  rra[0].cdp_prep[0].value = NaN
  rra[0].cdp_prep[0].unknown_datapoints = 0
  rra[1].cf = "AVERAGE"
  rra[1].rows = 744
  rra[1].pdp_per_row = 12
  rra[1].xff = 5.0000000000e-01
  rra[1].cdp_prep[0].value = NaN
  rra[1].cdp_prep[0].unknown_datapoints = 2
Now let's put some fake data into our RRD. $RANDOM is a built-in bash variable (also available in ksh): each reference to it produces a random integer from 0 to 32767:
$ start_time="1386350100"
$ heartbeat="300"
$ for i in {1..20}; do
    shift_time=$((heartbeat*i))
    input_time=$((start_time+shift_time))
    rrdtool update ping.rrd $input_time:$RANDOM
  done
And fetch data from the database, to make sure the data has arrived through the Data Source:
$ last_shift=$((300*10))
$ rrdtool fetch ping.rrd AVERAGE \
  --start $start_time --end $((start_time+last_shift))
                        ping_ms

  1386350400: 1.2571000000e+04
  1386350700: 1.7505000000e+04
  1386351000: 5.7370000000e+03
  1386351300: 1.9321000000e+04
  1386351600: 6.6680000000e+03
  1386351900: 2.8170000000e+04
  1386352200: 3.1665000000e+04
  1386352500: 1.0403000000e+04
  1386352800: 3.8130000000e+03
  1386353100: 5.5630000000e+03
  1386353400: 5.3420000000e+03
Now the most interesting part - let's build graph from the values in RRD:
$ last_shift=$((300*20))
$ rrdtool graph ping_graph.png \
  --start $start_time --end $((start_time+last_shift)) \
  DEF:ping_host=ping.rrd:ping_ms:AVERAGE \
  VDEF:ping_average=ping_host,AVERAGE \
  CDEF:ping_over_average=ping_host,ping_average,GT,ping_host,0,IF \
  CDEF:ping_below_average=ping_host,ping_average,LT,ping_host,0,IF \
  AREA:ping_over_average#FF0000:"ping over average" \
  AREA:ping_below_average#00FF00:"ping below average" \
  LINE1:ping_host#222222 \
  LINE2:ping_average#FFFF00:"average time of ping"
  481x168
$ file ping_graph.png
  ping_graph.png: PNG image data, 481 x 154, 8-bit/color RGBA, non-interlaced
To make things clear:
  • output file for graph is image in PNG format;
  • the time period is the creation time of the RRD database + last_shift, which is equal to 300s * 20;
  • the default variable is ping_ms from the RRA archive that best fits the defined time period;
  • the VDEF variable, ping_average, is a constant: the arithmetic average of all ping_ms values in the RRA archive;
  • the first CDEF variable is ping_over_average: all the values which are greater than ping_average;
  • the second CDEF variable is ping_below_average: all the values which are less than ping_average;
  • first graphical element: area, red colour - all the ping_over_average values, legend for graph is "ping over average";
  • second graphical element: area, green colour - all the ping_below_average values, legend for graph is "ping below average";
  • third graphical element: line, 1px width, dark grey colour - all the values of default variable;
  • fourth graphical element: line, 2px width, yellow colour - horizontal line with constant value of ping_average, legend is "average time of ping".
And the graph itself:
Easy, isn't it?

Complicated example — sar statistics, pyrrd

sar

sar is a Unix tool for collecting system activity reports. By default, sar scripts, which gather information about the system (CPU load, memory usage, I/O activity, etc.), are launched via cron every 5 minutes and store statistical data in binary files. Here is example output from a sar file for the 17th day of the current month, CPU load:
# sar -f /var/log/sa/sa17
...
12:00:01 AM       CPU     %user     %nice
12:05:01 AM       all      0.24      0.00
12:10:01 AM       all      0.25      0.00
12:15:01 AM       all      0.25      0.00 ...
...
And memory usage:
# sar -f /var/log/sa/sa17 -r
...
12:00:01 AM kbmemfree kbmemused  %memused kbbuffers
12:05:01 AM  11485772   4945420     30.10    325736
12:10:01 AM  11484664   4946528     30.10    325736
12:15:01 AM  11485508   4945684     30.10    325736 ...
...
Output of the sar utility is just perfect to be used as a source for an RRD.
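Extracting numbers from that text output is a matter of simple line parsing. Here is a minimal sketch in Python, assuming the column layout and 12-hour time format shown above (they can differ between sysstat versions and locales):

```python
def parse_sar_cpu(lines):
    """Parse sar CPU lines like '12:05:01 AM  all  0.24  0.00'
    into (time_string, %user) pairs, skipping header rows."""
    results = []
    for line in lines:
        fields = line.split()
        # data rows look like: time, AM/PM, 'all', %user, %nice, ...
        if len(fields) >= 4 and fields[2] == "all":
            results.append((fields[0] + " " + fields[1], float(fields[3])))
    return results

sample = [
    "12:00:01 AM       CPU     %user     %nice",
    "12:05:01 AM       all      0.24      0.00",
    "12:10:01 AM       all      0.25      0.00",
]
print(parse_sar_cpu(sample))
# [('12:05:01 AM', 0.24), ('12:10:01 AM', 0.25)]
```

The full sa2rrd.py script below does essentially this kind of parsing before feeding the values into RRD.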
If you are using CentOS or RHEL, sar can be installed via yum; it is a part of sysstat package:
 # yum install sysstat
Post-installation scripts will also register jobs in cron, so you will need to wait at least 1 hour after installation to get some reasonable input for the RRD.

Setup python virtual environment

virtualenv is a tool which creates an isolated working copy of a Python installation. You can work in virtual environments and install additional modules without affecting other Python installations. First of all we need to install python2.6, virtualenv and pip (a package manager which gives access to a repository of Python modules).
# yum install python26 python26-virtualenv python-pip
After installation we need to create virtual environment itself:
$ virtualenv-2.6 rrd_test
New python executable in rrd_test/bin/python2.6
Also creating executable in rrd_test/bin/python
Installing setuptools............done.
Now, let's activate new virtual environment:
$ . ./rrd_test/bin/activate
(rrd_test)$
Let's check the paths:
(rrd_test)$ which python2.6
~/temp/rrd_test/bin/python2.6
(rrd_test)$ which pip
~/temp/rrd_test/bin/pip
Everything seems fine, so the next step is to install pyrrd library using pip:
(rrd_test)$ pip install pyrrd
Downloading/unpacking pyrrd
  Downloading PyRRD-0.1.0.tar.gz (454Kb): 454Kb downloaded
...
Successfully installed pyrrd
Cleaning up...

Python script

I've decided that the best explanation of source code is profuse comments. Below you can see a Python script which parses sar output, creates RRD databases, updates them with values from the sar output, and draws a graph for a certain day of the month.
I've called it sa2rrd.py. Please save it somewhere in the virtual Python environment. Usage example:
(rrd_test)$ python2.6 sa2rrd.py --day <number_of_day>
Output PNG file will be in the directory with script and will look similar to this one:
Script's content can be found on pastebin.

Further investigation

If you are interested in the subject of this article, here are several points which may be worth your attention:
  • rrdcached - a daemon which caches input data; useful for high-load systems;
  • RRD-Simple - a Perl module to manage RRD databases;
  • modifying and customizing graphs accordingly to your needs; please see the documentation;
  • RPN - theory, examples;
  • more monitoring in Unix-like systems: additional sar parameters, ps, top (htop), iostat, lsof, vmstat, mpstat, netstat, socklist, tcpdump, nmap, ettercap, nagios (by the way, nagios is very friendly to RRD) and many others.

12/10/2013

JSP Tag Library with Scala (taglib resurrection userguide)

Introduction

“What is dead may never die!”
―  George R. R. Martin, A Song of Ice and Fire

One can argue for a long time over what is dead and what is not. You can argue over anything; the main question is whether it's worth doing when you need to act. Necromancy has always been condemned, and rightly so. But cases differ. What would you do if you needed the 'dead' to walk?


JSTL overview

A JSP Tag Library is a means of encapsulating certain actions which can be applied in JavaServer Pages. It should also be mentioned that it is an effective means of code reuse, because the created tags can be used multiple times. And those are things that may be very useful for Java developers working on the web.
It is worth noting that a library of custom tags is not quite a thing of its own, but rather a logical supplement to the JSP Standard Tag Library (although it may be vice versa). In fact, it would be strange to call something standard if there were not something non-standard. By the way, it somehow happened that when I first heard about TagLib, I was not familiar with JSTL and immediately thought: "Aha, dear foreach, now I'll implement you once and for all," but it turned out that it had already been done before me (I cannot say I was very upset). That is why, in order not to reinvent the wheel, TagLib should be considered from the JSTL point of view.
To put it mildly, JSTL is an ancient technology; it appeared as early as Java 1.4, so if you are interested in the details, the ancient manuscripts (the documentation) are always at your disposal. In turn, I will just give a quick overview of the things I’ve found interesting.
Thus, the standard tag library consists of several groups separated by functionality that is implemented by their components.
Core Tags
This group contains expression tags, remove tags (for removing scoped variables), conditional tags, the already mentioned forEach, the redirect tag and a few more tags you can explore on your own.
Formatting tags
This group contains tags used for formatting and output of date, time and text, as well as tags dealing with localization from resources, such as the timeZone tag.
SQL tags
Hmm, don’t even know what to add. In general, the name of this group and <query>, <update> and <transaction> tags should speak for themselves.
XML tags
Pretty much the same as the Core tags, but for XML. Except perhaps for the parse tag, used to parse XML data from an attribute or from the tag body.
JSTL functions
These are not exactly tags. Or rather, these are not tags at all. This is just a set of functions used to work with string data. I think that you are familiar with such things as: contains, substring, trim ... Well, I told you.


Creating custom tag library

Generally speaking, to create your own tag library you don’t have to do much. Namely, you have to do two things:
  1. Create a .tld file, which is essentially an XML document and describes the structure of your TagLib
  2. Create a handler class that will describe the internal structure of the tag

This is in general. Now let’s consider in more detail what you need to do to come from nothing to the simplest, but working tag.

Tag Library Descriptor

A TLD should be created in the /META-INF/ directory or in any of its subdirectories. Anyone who has read the documentation or a tutorial on TagLib may note that the location of the TLD file depends on how we are going to pack the project, and will be absolutely right. The method described here suits the case when the project is packed as a JAR; if you choose a WAR instead, the TLD must be in the /WEB-INF/ directory. My choice is explained by the fact that, in my opinion, it is much more logical for the library to live in a JAR file that can be connected to any project, rather than be created and used in only one.
Here is a simple example of a TLD:

<?xml version="1.0" encoding="UTF-8" ?>
<taglib
     xmlns="http://java.sun.com/xml/ns/javaee"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
     http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd"
     version="2.1">
     <tlib-version>1.0</tlib-version>
     <short-name>scltg</short-name>
     <uri>http://taglib.test.eleks.com</uri>
     <tag>
          <name>smpl</name>
          <tag-class>com.eleks.test.taglib.SimpleHandler</tag-class>
          <body-content>empty</body-content>
     </tag>
</taglib>
All that interests us here is the <uri> tag, whose content will help us declare the library on the JSP, and the <tag> tag, notable for being almost recursive, and for the fact that it describes the very thing we are striving for. In <tag-class>, you should indicate the fully qualified name of the tag handler class.
It should be added that there is a way of describing tags by tag files, but for convenience, I’ve done what I’ve done. If anyone is interested, they can look for the alternative option by themselves.

Tag handler class

There you go. We’ve got around to it. Get ready, there will be a surprise right now.

package com.eleks.test.taglib
import java.io.IOException
import javax.servlet.jsp.tagext.SimpleTagSupport
import javax.servlet.jsp.JspException
class SimpleHandler extends SimpleTagSupport {
     @throws[IOException]
     @throws[JspException]
     override def doTag() {
          getJspContext().getOut()
               .write("Hello, I'm a simple Scala tag!");
     }
}
Strange annotation, strange "override def"… Yep, it’s Scala. 
For those who are greatly worried, I’m showing the same handler code for Java:

public class SimpleHandler extends SimpleTagSupport {

     public void doTag() throws JspException, IOException {
          getJspContext().getOut()
               .write("Hello, I'm a simple Java tag!");
     }
}
I inherited from SimpleTagSupport to make my work easier. In fact, there is a range of possible base classes, and as always you can check the documentation for details.
That's all, it seems. I will also say a few words about how it all works, so you know which way to go.
In order to combine Java and Scala, a separate Maven project was created which, as I have said, is packaged as a JAR file. You can google for tutorials, so I won’t go into details, in order not to increase the entropy. As for pom.xml, here are only the things concerning Scala:

<properties>
     <scala.version>2.10.2</scala.version>
</properties>
<repositories>
     <repository>
          <id>scala-tools.org</id>
          <name>Scala-Tools Maven2 Repository</name>
          <url>http://scala-tools.org/repo-releases</url>
     </repository>
</repositories>
<pluginRepositories>
     <pluginRepository>
          <id>scala-tools.org</id>
          <name>Scala-Tools Maven2 Repository</name>
          <url>http://scala-tools.org/repo-releases</url>
     </pluginRepository>
</pluginRepositories>
<dependencies>
     <dependency>
          <groupId>org.scala-lang</groupId>
          <artifactId>scala-library</artifactId>
          <version>${scala.version}</version>
     </dependency>
</dependencies>
<build>
     <sourceDirectory>src/main/scala</sourceDirectory>
     <testSourceDirectory>src/test/scala</testSourceDirectory>
     <plugins>
          <plugin>
               <groupId>org.scala-tools</groupId>
               <artifactId>maven-scala-plugin</artifactId>
               <executions>
                    <execution>
                         <phase>compile</phase>
                         <goals>
                              <goal>compile</goal>
                              <goal>testCompile</goal>
                         </goals>
                    </execution>
               </executions>
               <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <args>
                         <arg>-target:jvm-1.5</arg>
                    </args>
               </configuration>
          </plugin>
     </plugins>
</build>

To use the tag library I’ve created another Maven project, which declares the following in its pom.xml file:

<dependencies>
     <dependency>
          <groupId>com.eleks.test</groupId>
          <artifactId>taglib</artifactId>
          <version>0.0.1-SNAPSHOT</version>
          <scope>import</scope>
     </dependency>
</dependencies>

Everything is easy.

Using on the JSP

So, here is an example of JSP using the library written by us:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<%@ taglib uri="http://taglib.test.eleks.com" prefix="scltg" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
     <head>
          <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
          <title>Insert title here</title>
     </head>
     <body>
          <scltg:smpl/>
     </body>
</html>
There are two important points here. The first is what I’ve already mentioned: we declare our TagLib and specify a prefix, with which we will refer to its tags. The second is the actual tag call: prefix, colon, tag name. As you can see, the tag has no body; that's because the following was declared in the TLD:
<body-content>empty</body-content>
That’s it.


Pre conclusion

It was really cool. Moreover, it works. How useful is it? Well, I don't know. It's useful the way something useful has to be. Neither more nor less.


Conclusion
“Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement.” 
― J.R.R. Tolkien, The Fellowship of the Ring

Technologies become obsolete. This is a fact. Those which had no time to become obsolete (and sometimes even to grow into something really in demand) are forgotten, lost, discarded with outbursts of "But why do we need it, after all, there is the ‘technology name’.” Fashion in the world of IT is as whimsical as any other fashion. And there's nothing you can do about that. Not so long ago, PCs came to replace huge computing machines accessed through terminals; then they were followed by laptops, then netbooks and tablets, and then cloud technologies came. Has the circle closed?
Nevertheless, there is always a need to go back to something old. Why not do it using something new? Maybe I have given the answer for the case of the JSP Tag Library.