In the last tutorial, we built a simple type named Hello which had some static members. In this tutorial, we’ll expand our Hello type to include a constructor, an instance property, and an instance method. Adding these will allow us to create instances of Hello using the new operator.
We’ll also make Hello store some data, bringing our type provider one step closer to awesome, and one step closer to being an effective means of interacting with structured data sources.
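Once those members exist, using the provided type from a script might look something like this. To be clear, the constructor argument and the member names (`Data`, `Print`) are placeholders I’ve made up for illustration, not the post’s actual API:

```fsharp
// Hypothetical usage of the provided Hello type; member names are placeholders.
let h = new Hello("some data")   // 'new' works once the type has a constructor
printfn "%s" h.Data              // instance property exposing the stored data
h.Print()                        // instance method acting on the stored data
```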
The full code for what we make in this part will be at the end of this post.
In Part 1 of this series, I briefly explained what a Type Provider was and some of the main concepts which you would need to know. In Part 2, I am going to build a very simple Type Provider. The purpose of Part 2 is to cover the basics of developing Type Providers, how to test them with F# Interactive, and the support tools which make developing Type Providers easy.
I will make a Type Provider which generates a single type named “Hello”. At first it will just have a static property which returns the string "World". Then I will add a static method which takes no parameters. Finally, I will add a static method which takes a parameter.
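From the consuming side, the goal is that a script can call those static members directly. The member names below are placeholders for illustration, since the post hasn’t fixed them yet:

```fsharp
// Hypothetical use of the provided type's static members.
printfn "%s" Hello.StaticProperty    // should print "World"
Hello.StaticMethodNoParams()         // static method taking no parameters
Hello.StaticMethodWithParam "F#"     // static method taking a parameter
```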
I’ve only been working with the F# language for the last year, which means that all of my learning has been with version 3.x of the language. One of the biggest features of 3.x, and something which I have yet to work with, is the Type Provider. Type Providers dynamically generate new types, usually from some data source (e.g. databases, XML documents, web services), which a developer can use in their code. For C# developers this is analogous to the Entity Framework or the “Add Service Reference” feature in VS, both of which take a database or a WSDL, respectively, and generate classes and functions that can be used in code. For Java this would be similar to Hibernate or wsdl2java. Just to be clear, when you create a Type Provider, what you’ve built is the equivalent of an Entity Framework or a wsdl2java. What F# provides is a framework for building your own Type Providers as easily as possible.
Last night was the first meetup for the San Diego Haskell Users Group. I have to say that it was a lot of fun and probably the most interesting dev meetup I’ve been to so far in San Diego. I have barely done any Haskell beyond getting the compiler set up on my computer and writing “Hello, World”. As such, I learned a lot.
I really love how Haskell lets you build “pipelines of data”: you chain together several functions, feed data into the top one, and get the output from the bottom. This is one of my favorite things to do in F# code, and Haskell just seems to make it even easier.
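The F# version of that style, for anyone who hasn’t seen it, is the pipe operator: each `|>` feeds the previous result into the next function, top to bottom.

```fsharp
// A small data pipeline: each |> hands the result down the chain.
[1 .. 10]
|> List.filter (fun n -> n % 2 = 0)  // keep the even numbers
|> List.map (fun n -> n * n)         // square each one
|> List.sum                          // 4 + 16 + 36 + 64 + 100 = 220
```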
Generalized Algebraic Data Types: something which was mentioned to me when I made a comment about doing a presentation on “Algebraic Data Types”. I have no clue what the difference between an ADT and a GADT is, but now I have to find out.
Cabal might be the simplest project/build management tool I’ve yet seen. Though I’ve only looked a little at Leiningen, I think Cabal fills the same role, and I found it much easier to read.
Having a meetup be a 15 minute walk from my apartment is the best.
Working with a system which is distributed and uses messaging for communication presents some interesting challenges. One frequent challenge I’ve dealt with a few times is figuring out what’s happening to a request as the system processes it. A request comes into my system, which wakes one service up to do some work; that service then sends commands to two other services, which both do some work; and then, when both are done, a final service does some work and completes the task. A little vague, but the scenario should illustrate that when trying to figure out what happened to the initial request, I’ve got to dig through at least four services’ worth of logs. And that’s assuming everything has only one instance; with multiple instances on multiple servers it becomes a huge chore.
The solution to this is fairly simple: use a log aggregator like Splunk, or roll your own with ElasticSearch. However, I want to have some fun and learn something new, and this is a perfect situation for learning and experimentation: the problem isn’t that complex, and if I get the solution wrong no one really cares, so the risk is low. What I decided to do was build something up using RavenDB and its built-in MapReduce index system.
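The shape of the idea can be sketched in plain F# before involving RavenDB at all: map each log entry to its correlation id, then reduce by grouping, so one request’s trail across every service lands in a single bucket. (The `LogEntry` type and its field names are my own illustration here, not RavenDB’s API.)

```fsharp
// Plain-F# sketch of the map/reduce idea behind the RavenDB index.
type LogEntry = { CorrelationId: string; Service: string; Message: string }

let traceRequests (entries: LogEntry list) =
    entries
    |> List.groupBy (fun e -> e.CorrelationId)   // the "reduce" key
    |> List.map (fun (id, es) ->
        // one bucket per request, across all services
        id, es |> List.map (fun e -> sprintf "%s: %s" e.Service e.Message))
```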
A final round of polish. Now that I have the layout and flow of my RabbitMQ library defined, it’s time to go through and do a bit of cleanup on my names. There’s a lot I can do to make code written with this library as readable and literate as possible.
Here’s the code you write to do the initial setup:
If I just look at this, I have to ask: open a connection to what? Context would probably help, but this function will get called only once in an entire application, so there’s not much reason to hold back on the name. I like the fluent style of naming, so I’ll go with:
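To give a flavor of what I mean by fluent naming (this particular name is purely hypothetical, not the one the library actually uses):

```fsharp
// Hypothetical fluent-style name: the call site reads as a sentence.
let connection = openConnectionToRabbitMqServerAt "localhost"
```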
Why have a function which manages both the reads AND the writes for a channel? Why not split the reads and writes out into their own functions? That would be better, in my opinion, for one very obvious reason: the code would explicitly explain what is happening. With my current createQueueFuntions function, nothing tells you that you get back a tuple, that the first element of the tuple is a write function, and that the second element is a read function.
In my previous post, I made my RabbitMQ client library a bit more functional by removing the Queue record type and replacing it with higher-order functions. These higher-order functions create the functions for reading from and writing to queues: if you want to write to “MyQueue”, you use the “writeTo” higher-order function to create a function that writes to “MyQueue”. It sounds more complex than it really is.
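As a sketch of the pattern, using the official RabbitMQ.Client API (the exact signatures in my library may differ; `writeTo` and `readFrom` here are illustrative, and the `Body`/`byte[]` shapes assume an older client version):

```fsharp
open RabbitMQ.Client

// Partially apply a channel and a queue name to get one-queue functions.
let writeTo (channel: IModel) (queueName: string) =
    fun (message: byte[]) ->
        channel.BasicPublish("", queueName, null, message)

let readFrom (channel: IModel) (queueName: string) =
    fun () ->
        match channel.BasicGet(queueName, true) with
        | null   -> None               // queue was empty
        | result -> Some result.Body   // one message, auto-acked
```

The payoff is at the call site: `let writeToMyQueue = writeTo channel "MyQueue"` gives you a function whose name says exactly what it does, instead of an anonymous tuple element.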
I did that because I had mentioned two things about my initial effort which bothered me: it wasn’t functional enough, and it didn’t support RabbitMQ consumers. I’ve taken care of the first; now I’m going to tackle the second.
I will follow the same higher order function approach:
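A sketch of what that could look like, built on RabbitMQ.Client’s `EventingBasicConsumer` (the function name and exact signature are my guesses at the design, not necessarily what the library ends up with):

```fsharp
open RabbitMQ.Client
open RabbitMQ.Client.Events

// Hand in a channel, a queue name, and a handler; get back a live consumer
// that pushes each delivered message through the handler.
let consumeFrom (channel: IModel) (queueName: string) (handle: byte[] -> unit) =
    let consumer = EventingBasicConsumer(channel)
    consumer.Received.Add(fun args -> handle args.Body)
    channel.BasicConsume(queueName, true, consumer) |> ignore  // auto-ack
    consumer
```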
I now have a functioning RabbitMQ Library! Though, there is a lot more to be done to make it satisfactory.
There are two problems:
Missing the Queue Consumer functionality. Consumers make dealing with RabbitMQ a lot easier, so I definitely want to get this in.
I’m not happy with using the record type to capture the Read and Publish functions for a queue. After all, how often are you going to be writing to and reading from a specific queue in the same process?