Steve Levine

Notes From a Talk on Evolving Java

Recently I attended a talk given by Brian Goetz about evolving Java. It was refreshing to hear from the Java language architect himself that Java is moving forward again. Like others, I have mixed feelings about how some of the features were added to the language, but overall, after hearing Brian’s talk, I believe Java is getting the attention it needs to “keep up”.

Below you can read some of the main points from the talk. Brian’s deck can be found here.

Full disclaimer, this post is me taking notes while trying to follow along with what Brian was saying. There might be a few errors. Please let me know if you find one.

This really doesn’t do Brian’s presentation justice, but I hope you find it informative about what’s coming in Java 8.

Note: I am working on a follow-up post with actual running Java 8 code to demonstrate most of the features mentioned here.

A night with Brian Goetz - NY JAVA Meetup, Dec 3, 2013

  • Java 8 - A new beginning
  • Trying to get Java moving again
  • Get things moving forward without breaking backwards compatibility

Modernizing Java

  • Language
    • Lambda Expressions (closures)
    • Interfaces (default methods)
  • Libraries
    • Bulk data on collections
    • Parallelism

Lambda Expressions

  • Argument list, a return type, and a body
    • (Object o) -> o.toString()
  • Can refer to enclosing values
    • (Person p) -> p.getName().equals(name)
  • Method references to an existing method
    • Object::toString
  • Allows you to treat code as data (see the sketch below)
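
To make these bullets concrete, here is a minimal, self-contained Java 8 sketch of the three forms. The Person class and the name variable are my own illustration, not from Brian’s slides.

Lambda Syntax Sketch (illustrative)
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class LambdaBasics {
    static class Person {
        private final String name;
        Person(String name) { this.name = name; }
        String getName() { return name; }
    }

    public static void main(String[] args) {
        // Argument list, arrow, body
        Function<Object, String> toStr = (Object o) -> o.toString();

        // A lambda can refer to (effectively final) enclosing values
        String name = "Ada";
        Predicate<Person> hasName = p -> p.getName().equals(name);

        // Method reference to an existing method
        Function<Object, String> toStrRef = Object::toString;

        List<Person> people = Arrays.asList(new Person("Ada"), new Person("Grace"));
        System.out.println(people.stream().filter(hasName).count());    // 1
        System.out.println(toStr.apply(42) + " " + toStrRef.apply(42)); // 42 42
    }
}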

History

  • In 1995 most mainstream languages did not support closures
  • Today, Java is the last holdout
    • C++ added them recently
    • C# added them in 3.0
    • All new languages have them

Long road

  • 1997 - Odersky’s Pizza
  • 2006 - 2008 - a vigorous debate between the BGGA and CICE proposals
  • Little language evolution since Java SE 5 (2004)
  • Project Coin (small language changes) in Java SE 7
  • Dec 2009 - OpenJDK Project Lambda formed
  • Nov 2010 - JSR-335
  • Current Status
    • Lambdas, interface default methods, bulk operations

Evolving a mature language

  • Those encouraging change
    • Adapting to changes in
      • hardware, attitudes, fashions, problems
  • Those discouraging change
    • Maintaining compatibility
      • Low tolerance for change that will break anything
    • Preserving the core
      • Can’t alienate the user base
  • Adapting to change
    • In 1995 everything was sequential, with imposed order
    • Very deterministic
  • We want to introduce things that are more parallel
  • We had the wrong defaults at the start, namely mutability
  • Hard to undo this default behavior technically as well as in people’s mindsets
Typical Iteration Example (External Iteration)
for (Shape s : shapes) {
    if (s.getColor() == RED)
        s.setColor(BLUE);
}
  • The for-each loop hides complex interactions
  • External iteration - the client drives the iteration, so the what and the how are intermingled

Inversion of Control

  • Allows libraries to be much more expressive
Lambda Example (Internal Iteration)
shapes.forEach(s -> {
    if (s.getColor() == RED)
        s.setColor(BLUE);
});
  • Internal iteration - the client is in charge of the what, the library is in charge of the how

Functional Interfaces

  • Predicate<T>, Consumer<T>, Supplier<T> (see the sketch below)
  • Predicate<String> isEmpty = s -> s.isEmpty();
  • Runnable r = () -> { System.out.println("hello"); };
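
As a quick illustration (my own example, not from the talk), here is a tiny runnable sketch that exercises each of these functional interfaces:

Functional Interfaces Sketch (illustrative)
import java.util.function.Consumer;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalInterfacesDemo {
    public static void main(String[] args) {
        Predicate<String> isEmpty = s -> s.isEmpty();
        Supplier<String> greeting = () -> "hello";
        Consumer<String> printer = s -> System.out.println(s);
        Runnable r = () -> System.out.println("hello from a Runnable");

        System.out.println(isEmpty.test(""));   // true
        printer.accept(greeting.get());         // hello
        r.run();                                // hello from a Runnable
    }
}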

We could have added function types, but it was obvious and WRONG

  • Would have interacted badly with erasure, introducing complexity and corner cases, and would have created a notion of old and new libraries
  • Better to preserve the core
  • Bonus - existing libraries are now forward compatible with lambdas

Lambdas enable better APIs

  • Enable more powerful APIs
  • The client-library boundary is more permeable
  • Safer, and exposes more opportunities for optimization
Example Higher Order Function
interface Comparator<T> {
    public static <T, U extends Comparable<? super U>>
    Comparator<T> comparing(Function<T, U> f) {
        return (x, y) -> f.apply(x).compareTo(f.apply(y));
    }
}

Problem: Interface evolution

  • If you add a method to an interface, it will break all implementing libraries (obviously)
  • It is a source-incompatible change, but existing binaries will continue to work
  • Libraries will start looking old
  • Need a way to evolve them or replace them
  • Collections.sort() - “bags nailed to the side, don’t want to continue this” –BG
Interface with Default Method
interface Collection<T> {
    default void forEach(Consumer<T> action) {
        for (T t : this)
            action.accept(t);
    }
}
  • Can be overridden, like a virtual method
  • Callers don’t know whether they are getting the default or another implementation found in the superclass chain

A question was posed asking why the default keyword is necessary. Can’t the compiler infer that if there is an implementation in the interface, it is the default? “Of course it can figure it out… but we wanted extra clarity, deal with it. :) ” –BG

Some might say: “We now have multiple inheritance in Java???”

  • Java always had multiple inheritance of types
  • This adds multiple inheritance of behavior
    • But not of state
    • Java interfaces are stateless (like Fortress’s traits)

Resolution Rule 1

  • If a class can inherit a method from a superclass and a superinterface, prefer the superclass
    • Defaults are only considered if no method is declared in the superclass chain
    • True for both concrete and abstract superclasses
  • Ensures compatibility with previous versions of Java

Resolution Rule 2

  • If a class can inherit a method from two interfaces, and one is more specific than (a subtype of) the other, prefer the more specific one
    • An implementation in List would take precedence over one in Collection
  • The shape of the inheritance tree doesn’t matter (see the sketch below for both rules)
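
Here is a minimal sketch of rules 1 and 2 in action; the class and interface names are mine, chosen purely for illustration.

Resolution Rules Sketch (illustrative)
public class ResolutionRules {
    interface A { default String m() { return "A.m (default)"; } }
    static class Base { public String m() { return "Base.m (superclass)"; } }

    // Rule 1: the superclass method wins over the interface default
    static class C1 extends Base implements A {}

    interface General  { default String name() { return "General"; } }
    interface Specific extends General { default String name() { return "Specific"; } }

    // Rule 2: the more specific interface (Specific) wins over General
    static class C2 implements General, Specific {}

    public static void main(String[] args) {
        System.out.println(new C1().m());     // Base.m (superclass)
        System.out.println(new C2().name());  // Specific
    }
}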

Resolution Rule 3

  • There is no rule 3!
Class inheriting behavior from two SuperInterfaces
interface A {
    default void m() {}
}
interface B {
    default void m() {}
}

class C implements A, B {
    public void m() { A.super.m(); }
}
  • If you inherit two superInterface implementations, you (as developer) need to disambiguate which implementation to call
  • The onus is on the developer to decide, not the compiler
Another SuperInterface Example
interface A {
    default void m() {}
}
interface B extends A {}

interface C extends B {}
// gets impl from A 
class D implements B, C {}

How lambdas can help

Typical Comparator Example
Comparator<Person> byLastName =
    Comparator.comparing(p -> p.getLastName());
Collections.sort(people, byLastName);
  • We want the code to look exactly like the problem statement
Comparing with Lambdas
Collections.sort(people, comparing(p -> p.getLastName()));

// Option 1: use a simple lambda
people.sort(comparing(p -> p.getLastName()));

// Option 2: use a method reference
people.sort(comparing(Person::getLastName));

// We can also "reverse" the ordering
people.sort(comparing(Person::getLastName).reversed());

// Or add an additional comparison to the pipeline
people.sort(comparing(Person::getLastName).reversed()
    .thenComparing(Person::getFirstName));

The important thing is to be able to look at code and KNOW what it does!

Example from Above
shapes.forEach(s -> {
    if (s.getColor() == RED)
        s.setColor(BLUE);
});
  • Let’s say we want to massage the results of the above collection
  • Another new feature added to the collections library is streams
Manipulate all elements of a Collection after applying a Filter
shapes.stream()
    .filter(s -> s.getColor() == RED)
    .forEach(s -> { s.setColor(BLUE); });
Filter and Collect
List<Shape> blueBlocks
    = shapes.stream()
        .filter(s -> s.getColor() == BLUE)
        .collect(Collectors.toList());
Filter, Transform, and then Collect
List<Shape> blueBlocks
    = shapes.stream()
        .filter(s -> s.getColor() == BLUE)
        .map(Shape::getContainingBox)
        .collect(Collectors.toList());
Filter, Map, and then Aggregate
int sumOfWeights
    = shapes.stream()
        .filter(s -> s.getColor() == BLUE)
        .mapToInt(Shape::getWeight)
        .sum();
  • Believe it or not, these examples are not any more expensive (and are perhaps cheaper) than a typical for loop
  • This is possible because the stream makes a single pass over the data
    • It builds a pipeline of Filter and Map, and then Sum invokes them
    • In other words, Filter and Map are lazy operations (see the sketch below)
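
To see the laziness for yourself, here is a small sketch (my own, not from the talk) that uses peek() to trace the pipeline; each element flows through filter and map in a single pass, and map only ever sees the elements that survived the filter:

Lazy Pipeline Sketch (illustrative)
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LazyPipeline {
    public static void main(String[] args) {
        List<Integer> result = Arrays.asList(1, 2, 3, 4).stream()
            .peek(n -> System.out.println("filter about to see " + n))
            .filter(n -> n % 2 == 0)
            .peek(n -> System.out.println("map about to see " + n))
            .map(n -> n * 10)
            .collect(Collectors.toList());
        System.out.println(result);   // [20, 40]
    }
}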

Imperative vs Streams

  • Individual data items vs sets of data
  • Focused on the how vs the what
  • Doesn’t read like the problem statement vs reads like the problem statement
  • Steps mashed together vs well factored

Parallelism

  • Goal: Easy to use parallel libraries for Java
Parallelizing the getWeight operation is easy
int sumOfWeights
    = shapes.parallelStream()
        .filter(s -> s.getColor() == BLUE)
        .mapToInt(Shape::getWeight)
        .sum();

Conclusion: So why lambdas?

  • It’s about time! “All the cool kids are doing it” – BG
  • Provide libraries a path to multi-core (this needed internal iteration)
  • Empower library developers

Q/A

  • Features left out - what’s coming next?
    • Value types - a long or a decimal (or another type) that won’t need to be accessed via a pointer, but instead directly from a register
    • Useful for static data

On Deciding Between PUT and POST When Creating a RESTful Resource

The goal of this post is to help you figure out which HTTP verb, PUT or POST, is more appropriate to use when adding and updating resources.

Before we can get into the details of which HTTP verb to use when, we first need to understand the type of “back end” or “server” or “service” we are trying to add the RESTful API to. For the purposes of this post, we will refer to the service providing the API as the “resource archetype”.

To start, we need to distinguish the differences between the two basic types of resource archetypes, namely, a collection and a datastore or “store” for short.

  • A collection is a server-managed directory of resources. This means that clients may propose the addition of a new resource, but it is up to the discretion of the server whether or not to add or update the requested resource. If the server decides to add the resource, it will reply to the client with an ID that is associated with the newly created resource.

For the purposes of this conversation, we are not going to get in to the details of how the server provides the ID to the client, it is out of the scope of this post.

  • A store is a client-managed resource repository. This means the client can create, read, update, and delete resources on its own terms without any interference from the server. When interacting with a store, the client is responsible for assigning an ID to the resource and managing the workflow around that task.

If these terms feel too abstract, let’s walk through two concrete examples to fully illustrate the two concepts.

First, let’s discuss a collection. The simplest form of a collection is a middleware stack on top of a physical datastore. For our example, the physical datastore is a NoSQL database, let’s say Riak. Sitting on top of Riak are Spray and Akka, which provide a RESTful middleware stack. For the purposes of this example, all you need to know about Spray and Akka is that Spray provides an API to accept RESTful requests and reply to them, and Akka provides an abstraction around the physical transport between Spray and the actual socket. The main point of all this is that Spray is acting as the middleware/business logic: it will handle the request and determine whether Riak gets updated.

Try not to get bogged down in understanding these technologies if you are not familiar with them; instead, substitute a framework/database that you are familiar with, as it doesn’t change the context of this example.

Now let’s set up our example store, which ends up being a lot simpler than the collection, mainly because there is no middleware between the physical datastore and the client. Following the example of the collection, the store would simply be a Riak instance.

Now that we have a clear understanding of the difference between a collection and a store, we can discuss when to use a PUT vs a POST when creating or updating a resource.

When interacting with a store:

  • A PUT must be used to add a new resource, with the ID specified by the client.
  • A PUT must also be used to update or replace an already stored resource.

The reason you PUT to add a new resource to a store is that the client has full control over the details of the resource. The store is acting on behalf of the client; on its own, the store has no notion of what the data means. Thus it makes sense that the client has the ability to put things in the store, and to update them the client simply puts the resource again. This is another way of saying the request is idempotent.

When interacting with a collection:

  • A POST must be used to create a new resource, and the collection provides the ID to the client.
  • A PUT must be used to update or replace an already stored resource.

The main difference here is that the collection potentially has a middleware layer and some sort of business logic, so the server has the ability to determine whether the request is valid. It is therefore possible that the client can POST (request) the creation of the new resource and not get back an ID, meaning the request failed. Another thing to notice is that no ID gets passed from the client, because it is assumed that the server handles this logic.

If you take a step back and think about our collection example above, it is possible that Spray has many different clients besides the RESTful one. That is why it needs to manage the notion of identity. Whereas in the store example, if Riak is exposed to the client directly without middleware, chances are that the Riak database is meant only for the RESTful client.

Here is an additional example taken directly from the Riak documentation that also illustrates the point: when the client is providing an ID, the request should be a PUT, because the client is literally putting the resource in the store. Riak also has the capability to act as middleware and provide the ID to the client, and of course that request is a POST.

PUT /buckets/bucket/keys/key    # User-defined key
POST /buckets/bucket/keys       # Riak-defined key
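
To make the two interactions concrete, here is a hedged sketch using Java 11’s java.net.http client against Riak-style URLs like the ones above; the host, port, bucket, key, and payload are made up for illustration.

PUT vs POST Sketch (illustrative)
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PutVsPost {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String json = "{\"color\":\"blue\"}";

        // Store semantics: the client owns the key, so it PUTs to the full resource URI.
        HttpRequest put = HttpRequest.newBuilder(
                URI.create("http://localhost:8098/buckets/bucket/keys/my-key"))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(json))
            .build();

        // Collection semantics: the client POSTs to the collection and the server picks the key,
        // typically returning it in the Location header of the response.
        HttpRequest post = HttpRequest.newBuilder(
                URI.create("http://localhost:8098/buckets/bucket/keys"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();

        HttpResponse<String> putResponse = client.send(put, HttpResponse.BodyHandlers.ofString());
        HttpResponse<String> postResponse = client.send(post, HttpResponse.BodyHandlers.ofString());
        System.out.println(putResponse.statusCode());
        System.out.println(postResponse.headers().firstValue("Location").orElse("(no Location header)"));
    }
}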

As always, please feel free to comment on this post or email me with any comments, questions, or concerns.

Thanks for reading.

WebView(Javascript) -> Native Android API

For a few weeks towards the end of last year I was developing an Android application. During that time I discovered a few new (to me) things about the Android platform. One in particular that caught my attention was having Javascript code running locally on the device call a native Android function.

Before getting into the technical details, let me first talk about a situation where this piece of functionality would be useful. Let’s say (for argument’s sake) that you have to write an Android application that requires advanced and very polished charts. What are your options? You can try to find a native Android implementation that meets the requirement, but I must admit, I have been there and done that, and couldn’t find any particularly good libraries. Yes, there are some out there, but they didn’t have the polish I was looking for (please comment if you know of good ones).

If there aren’t any good Android libraries available, what can you do? From personal experience I know there are a lot of good Javascript charting libraries out there. How can this help when developing an Android application and not a web application? You can host the charting library on a server somewhere and reference it from an Android WebView. From my experience, this solution is not optimal because of slow performance. Even though it was too slow, it still looked much better than any of the native libraries available.

Is there a way to get the Javascript code to run faster, perhaps by taking advantage of the beefy hardware most Android devices run on? It turns out it is quite easy to run the Javascript libraries directly on the device. After moving the Javascript code from the server to the device, performance was greatly improved. The charts rendered fast and were very responsive to the touch.

Running Javascript on the device instead of the server is fast, but it creates a different sort of problem: now you have a view (a Javascript chart) running inside another view (a WebView), so how does the Javascript library get its data? The obvious answer is to have the Javascript code call some (REST) service via HTTP. For argument’s sake, let’s say this would not work because the data is only available via a proprietary Java-wrapped protocol. Is there a way for the Javascript code to make a Java call? There is, and that is what the rest of this post is going to be about.

For simplicity, I am going to abstract away the charts and data and replace them with a simple requirement, namely, have a WebView render the underlying Android SDK version.

Note: For the purposes of this post, and because I like it, I am going to use Scala as the programming language. As always, you can find all the code on Github.

The goal of this demo is to show how you can call Java from Javascript running in an Android WebView, thus we need to create a WebView, populate it with a simple html file and then enable Javascript.

MainActivity.scala - setting up basic view
  // Step 1: Create WebView 
  val webView: WebView = new WebView(this)

  // Step 2: Load page from assets
  webView loadUrl ("file:///android_asset/index.html")

  // Step 3: Enable Javascript
  webView.getSettings setJavaScriptEnabled(true)

So far, so easy. Next, we need to create a simple Scala function in our MainActivity to expose for Javascript to call. We said we wanted our view to expose the underlying Android SDK version, so let’s create a function called sdkVersion().

MainActivity.scala - function to expose SDK version
  object jsFun {
      // @JavascriptInterface (android.webkit.JavascriptInterface) is required on exposed
      // methods when targeting API level 17 or higher
      @JavascriptInterface
      def sdkVersion() = android.os.Build.VERSION.SDK
  }

Next we need to make this function available to Javascript by adding it to the DOM.

MainActivity.scala - Adding a function to the DOM
  // Add the above function to the DOM as "Android" 
  // The function can now be invoked from Javascript with the following: Android.sdkVersion()
  webView addJavascriptInterface(jsFun, "Android")

Optional step: When developing Javascript applications it is sometimes helpful to be able to log debug messages to the browser console. Believe it or not, it is quite easy to replace the browser’s console implementation with an Android one, so that instead of logging messages to the browser console, it sends them to the Android Logcat system.

MainActivity.scala - Implement Javascript console.log
   // provide the WebView with a console.log implementation
   webView setWebChromeClient new WebChromeClient {
      override def onConsoleMessage(consoleMessage: ConsoleMessage): Boolean = {
        val msg = new StringBuilder(consoleMessage
          .messageLevel.name).append('\t')
          .append(consoleMessage.message).append('\t')
          .append(consoleMessage.sourceId).append(" (")
          .append(consoleMessage.lineNumber).append(")\n")
        if (consoleMessage.messageLevel == ConsoleMessage.MessageLevel.ERROR)
          Log.e("JavascriptExample", msg.toString())
        else
          Log.d("JavascriptExample", msg.toString())
        true
      }
    }

Finally we need to create our html/javascript view.

Let’s create a file called index.html, place it under src/main/assets, and add at least the following code to it (the page also needs a local copy of jQuery in the assets directory, since the click handler below uses it):

index.html - default page that calls our Android.sdkVersion() function via Javascript
<body>
<div>Click <a href="#">here</a> to invoke an Android function to
   find out the Android SDK version used to build this App.
</div>
<div class="sdk">SDK:</div>

<!-- jQuery must be available to the page, e.g. a copy bundled in the assets directory -->
<script src="jquery.min.js"></script>
<script>
    console.log("This message should appear as a debug message in Logcat.");
    $("a").click(function () {
        $('.sdk').append(Android.sdkVersion());
    });
</script>
</body>

As you can see, we are invoking the Android (Scala) method called sdkVersion() and appending the result of the call to a div using jQuery.

That’s all there is to it; now you know how to invoke an Android function from Javascript running in an Android WebView.

Mission Accomplished, Migration From Wordpress to Octopress Complete!

Someone told me about the Octopress blog engine last week. It is based on Jekyll, the engine that powers sites such as GitHub Pages. The main difference between Octopress and plain vanilla Jekyll is that with Jekyll you have to write your own templates, styles, and JavaScript code, while with Octopress all of that has been abstracted away. All you need to do is clone the Octopress repository and start writing posts (or migrating posts, in my case). With that being said, I have spent the past week migrating my blog from WordPress to Octopress.

Here are some of my initial thoughts, first on Wordpress:

  • Compared to when it first came out, WordPress is now too heavy for a simple blog. It has the functionality of a content management system, and that is why sites like TechCrunch use it to manage their content. They also have an army of authors creating content, versus one for my blog.

  • Every time the admin site opens, it says that a WordPress upgrade is available. At first this was neat, a one-click upgrade, but now it is becoming a bit of a risk. A lot of effort has gone into the site, and if for some reason the upgrade process is not clean, it can break a lot of things. Clearly you should be backing up as you go along, but then it is no longer a one-click upgrade.

  • The process of writing a post is very heavy. First you need to log in to the admin site, then start writing the post in HTML markup. If you want to include code in your post, there is not really a clean way to do it. To preview your post you hit preview, but most of the time the code plugins do not behave correctly, so the only real way to preview is to publish the post.

  • An internet connection is required to do anything. This is not 100% true, as you can always use a tool such as MarsEdit to write the post offline, but the preview never looks the same as it does when published on the real site. It also doesn’t understand how to preview or author code samples.

Now my thoughts on Octopress:

  • It is based on Ruby which is Cool!

  • No database required! Since Octopress is nothing more than a template engine, the end product is static HTML that you can copy to any host and it will be live.

  • Since Octopress is a template engine, not a blog engine, you have your entire blog including all your posts in a single project. This works out great because you can open the project in RubyMine and see your posts as Markdown, your design as Sass and CSS, and your site layout as Liquid Templates.

  • The writing process is very agile, as you can preview the site as many times as you want without having to copy/push/deploy code anywhere. All you need to do is cd into your blog’s project directory, type rake preview, and browse to localhost:4000.

  • You can write posts in Markdown, which happens to be my favorite way of generating web content (thanks to GitHub and Stackoverflow).

  • There are a variety of great ways to embed code within a post. The first option is to use the neat GitHub notation of surrounding your code with three tick marks. The second is to use the Liquid template notation, which is equally nice. Let’s not forget that you can include a GitHub gist VERY easily, just { gist gistIdNumber }. Can’t get much easier than that! Last but definitely not least, you can include a source file from the filesystem.

  • Code samples look fantastic when they are rendered in the Solarized-themed code viewer! Here’s a sample:

Tail Recursive Fibonacci
  def fib(n: Int) = fib_tr(n, 1, 0)

  def fib_tr(n: Int, b: Int, a: Int): Int = n match {
    case 0 => a
    case _ => fib_tr(n - 1, a + b, b)
  }

It has been about a week and my blog is now completely migrated over to Octopress. It has been a rather enlightening experience for me because Octopress and WordPress are so different. The bottom line is that Octopress was written by and targeted at hackers, and that fact inspires me at a level that WordPress never was able to. I am hoping to leverage that hacker inspiration to get myself writing posts on a more regular basis, starting today.

In Case You Haven’t Heard - Apple Is Not Showing Java Any More Love…

Official release notes. If you think about it, it makes perfect sense for them. To them, Java is no different than Flash, just a GUI platform. How many killer Java applications are out there (besides Java IDEs)? Add to that the fact that SunOracle is probably twisting their arm for more licensing money.

Thus, it makes perfect sense for them: why would they want to waste their resources implementing a JVM that is required by only a few applications, especially when they are trying to bootstrap their “App Store”? Supporting Java developers working on server software is not part of their business model.

But… This is not bad for Java developers (although lots of Java developers are showing the Apple hate right now), because the Apple JVM was never up to date, and was always behind the “real” JDK implementation. Remember the Eclipse 64-bit fiasco?

My hope moving forward is that the community comes together to produce a completely open source version of the JVM for BSD/Mac. There are already two really good starting points, SoyLatte and OpenJDK. It would be great if Apple were to open source their JVM code base (although this is not likely due to SunOracle licensing), but one can still hope.

Bottom line, if you are a Java developer you do not have to start migrating away from OS X; everything will be fine – just give it a bit of time.

Just my thoughts.

First Thoughts on My New Eee Pc (1005PE)

So far, so good with my new Eee PC (1005PE) netbook. The machine came pre-installed with Windows 7 Starter edition, so my first task was to get rid of Windows 7 and install Ubuntu Netbook on it. So I downloaded the Ubuntu Netbook 10.04 daily-build image (yes, I am daring), and was on my way.

At first, this task seemed easier said than done because I was running into the most fundamental problem possible, namely, I could not get the Eee PC to boot from the ‘bootable’ USB stick I created on my Mac Pro desktop. I checked every single BIOS setting and made sure that USB was chosen as the priority boot drive. Still nothing. I kept getting the Windows 7 startup sound, which was starting to get a bit tedious.

The next thing I thought was that maybe the Cruzer U3 software was causing the trouble, so I went out, found a U3 uninstaller, and ran it. It still didn’t boot from the USB stick. I then found out that in order to get the boot menu on the Eee PC, you need to hold down the Escape key while it’s booting. I tried that; it listed the USB as a bootable target device, I selected it, but it still went into Windows 7.

At this point I was quickly running out of ideas; the only other thing I could think of was that perhaps the USB stick was somehow not bootable. Maybe the USB stick was not created correctly, even though I followed the Ubuntu Mac instructions step by step. I downloaded the Ubuntu 10.04 daily build onto my Eee PC while booted into Windows 7, and then downloaded a program called UNetbootin. This time I used that program to create my bootable USB stick, and then I tried to reboot again.

This time it booted into the live CD version of Ubuntu. Yes, I was saved! I couldn’t believe that it was a bad image on my USB stick. Why can’t a Mac create a bootable USB stick? The strange thing is that the Eee couldn’t even read the files on the USB stick when connected in Windows 7, but when the stick was plugged into my Mac Pro, I was able to see the files fine. And vice versa: once I created the USB stick in Windows, I couldn’t see the files on the Mac. What is the deal here? I thought ISO images were platform independent?

With that being said, the USB stick problem is well in the past for me. Look for my next post, where I will give my impressions of Ubuntu Netbook 10.04.

RubyMine 2 Debugging Issue Resolved

If you are trying to debug Ruby code in the RubyMine 2 IDE but are having difficulties, such as the IDE freezing after you try to step in, step over, or step next, and are wondering if your configuration is wrong: it is not. If you happen to have installed the ruby-debug-ide19 gem from the command line (not from the IDE), you need to patch the actual gem code to get things working nicely.

  • Open the following file with your favorite text editor (part of the ruby-debug-ide19 gem)
$GEM_HOME/ruby-debug-ide19-0.4.12/lib/ruby-debug/command.rb
  • Add the following code at around line 120 (see below for the full code location):
return "" if str == "$FILENAME"
  • After the modifications, the code should look like:
def debug_eval(str, b = get_binding)
  begin
    str = str.to_s
    return "" if str == "$FILENAME"
    max_time = 10

That’s it; you should now be able to debug your Rails/Ruby code in RubyMine without issues.

With Grape, Groovy Is on Par With Native Scripting Languages

If you haven’t heard, the latest version of Groovy was released this week and included with it, among many other great features, was Grape (Groovy Advanced Packaging Engine). Grape is an annotation based dependency management system that provides functionality similar to that of Maven and Ivy with one clear advantage, namely, no build file.

If Grape doesn’t use a build file, how does it know what dependencies are necessary to run the code? Does it figure it out for you on the fly? Unfortunately, it is not that smart (yet); perhaps in the next release. If it doesn’t figure it out for you, then how do you specify your dependencies? You configure them by using the @Grapes or @Grab annotations.

What is so good about being able to configure your dependencies via annotations?

If you are working with Groovy scripts, it frees you from having to worry about dependency management and allows you to focus on what the script needs to do, much like when working with other scripting languages like Ruby or Perl. In order to clearly demonstrate the advantages of Grape, let’s walk through an example.

The problem

I am trying to keep up with my ever-changing IP address after switching ISPs earlier this year. There are several services running at my home that I need access to on a daily basis. If my IP changes overnight, after a brownout, or for some other reason, I need to know about it ASAP.

In order to keep up with my IP address, I wrote a set of scripts that perform the following:

  • Obtains the current IP address of the server where it is running
  • Looks up the most recent IP address of the server in a log file
  • If the current IP address is different from the most recent IP address:
    • Updates the log file with the current IP address
    • Sends the new IP address in a customizable email to a configurable address
  • If the IP addresses are the same, it does nothing.

The Solution

It took a total of three Groovy classes/scripts to solve this problem. We are not going to get into the details of the solution because I want to stay focused on Grape.

You can find all of the code discussed in this post on GitHub. Please feel free to download and use it. Feedback is welcome as well.

This simple Groovy class first connects to a mail server, and then sends the change of address message.

[Code listing: Mailer.groovy - see the GitHub repository linked above.]

The most interesting things to pay attention to are:

  • The @Grapes block after all of the imports, you can see this groovy class depends on javax.activation and javax.mail jars.
  • Thanks to Grapes, you can compile this class simply by invoking groovyc Mailer.groovy as opposed to having to configure either Maven, Gant, Ant, or some other build tool to manage the dependencies and classpath for you.
  • What’s the big deal? Read more to find out!

This next code snippet represents the “main” entry point of my solution. It simply obtains the current IP address of the machine it is running on, checks the current address against the most recent known address stored in a log file, and then uses the previous class to send an email if the IP address has changed.

[Code listing: whereIsMyIp.groovy - see the GitHub repository linked above.]

The most interesting thing to pay attention to in this script is:

  • The #!/usr/bin/env groovy on the first line of the script.
  • This line enables the script to be called directly from the command line, like ./whatsMyIp.groovy instead of groovy whatsMyIp.groovy

The Big Deal!

If Grape didn’t exist, the only way to invoke this script would be with a build tool such as Maven, Gant, or some other. If a build tool didn’t suit you, then you would have to invoke groovy -classpath=/path/activation.jar... and manage the dependencies there. Both of these solutions work fine, but are clunky.

If you were to solve this problem using a language such as Ruby, you would not have to worry about dependency management, since Ruby is so closely integrated with the OS. You would simply run gem install for some gem, and this would install the dependencies at the OS level, allowing you to focus on your script and letting the Ruby runtime focus on the dependencies. Invoking ./someScript.rb is common in Ruby.

Grape gives Groovy scripts the same clean dependency abstraction. It is possible to invoke ./whatsMyIp.groovy without having to worry about any dependency management. Once the groovy runtime comes across the Grape annotations, it loads the dependencies on demand freeing the Groovy script from having to be wrapped with a dependency management layer.

This is a huge deal because now simple Groovy scripts can leverage the entire Java ecosystem from the command line without having to wrap the invocation with a build tool. Groovy Scripts are now clean, simple, and easy. I hope this inspires you to go out and convert some Ruby or Perl script to Groovy.

Time Machine Over a Network Drive

This post describes the steps involved in setting up Time Machine to back up to a network drive. These steps are only required if you want to back up to a device other than a Time Capsule. It is pretty quick and easy, so without further ado, let’s get started.

Step 1: Enable network backups in Time Machine

In a terminal window cut/paste the following command:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

Update: Steps 2 & 3 are only required if you are not running Snow Leopard. If you are, then all you need to do is mount the network drive you wish to use as a Time Machine destination, and then proceed to Step 4.

Step 2: Create the Time Machine backup volume

In a terminal window cut/paste the following command:

hdiutil create -size 200g -type SPARSEBUNDLE -fs HFS+J -volname "Backup of computer-name" computer-name_[mac address without ':'].sparsebundle

The -size value is the maximum size the backup is allowed to grow to; adjust it to suit your storage.

The simplest way to obtain your MAC address is to open a terminal window and type the command ifconfig -a, then look for the section of the output that says: ether 00:33:44:55:66:77

The simplest way to obtain your computer name is to open a terminal window and type the command hostname; it will return the name of your computer, for example, my-hostname.

Putting it all together, based on the above examples, you would run the following command:

hdiutil create -size 200g -type SPARSEBUNDLE -fs HFS+J -volname "Backup of my-hostname" my-hostname_003344556677.sparsebundle

Step 3: Copy the file created in step 2 to the network Time Machine backup destination

Using finder or terminal, copy the newly created .sparsebundle file to the place you want your Time Machine backup to reside.

Step 4: Open the Time Machine preferences, and the network drive should show up as a backup target

If for some reason it doesn’t, try opening and closing the Time Machine preferences, as it may take a moment for it to detect the newly available network drive.

Step 5: Rest easy knowing your Mac is now backed up to a network storage volume.

Scala Sugar - Iteration

In this second installment of Scala Sugar, lets put the lists that we created in the previous post to use.

How do we typically interact with lists when writing non-trivial programs? We iterate over them! With that being said, lets explore how iteration in Scala compares with iteration in Java.

Taking the lists from the previous post into account, let’s assign ourselves the task of iterating over each element in the list and converting it to uppercase.

First, we all know how to do this in Java using a standard for-each loop:

for (String s : l) { System.out.print(s.toUpperCase()); }

There are so many different ways to iterate in Scala, so we are only going to talk about the most trivial ones.

for (s <- l) print(s.toUpperCase())

-or-

l.map(_.toUpperCase()).foreach(printf("%s", _))

Download Source: simpleLists.scala

As you can see, you can loop in Scala the same way that you do in Java, namely, with a for each loop. There is nothing special about that.

The second loop is written in more of a functional style, as it uses the Scala map function. It allows you to iterate over the list without having to know anything about the details of the iteration itself; with Scala you are working at a much higher level. If we look at the Scala map function, it takes a function as an argument, in this case toUpperCase(). The map function then applies this function to all of the elements in the list, so you don’t have to worry about the actual iteration logic. In this scenario, all the caller needs to know is that they have a list of elements and they want some function f applied to all of them.

You can chain functions together on a list. In this case, we chained a foreach onto the end of the map. If we were to describe what is going on in plain English, it would sound something like: take all the elements of l, apply toUpperCase to all of them, then for each of them, print them.

The final interesting thing to notice in the above line of code is the “_” placeholder syntax. It looks strange to have a “_” there as part of the code, but all it is doing is acting as a placeholder for the function’s argument. It simply represents the current element of the list being operated on. Even though there are two “_“‘s in this example, they are completely independent of each other. The placeholder is a very powerful, advanced concept in Scala, and this example barely scratches the surface of its usage. We will talk more about it in a dedicated post.

As you can see, Scala supports both the “Java” way of iterating and a pure functional way. Again, this example is just one of the many different techniques for iterating in Scala. In a future post we will look at other ways of iterating in Scala.

References

  • The code found in this post is hosted at github.com along with other sample Scala code.