Any var, private or not, is accessible if you refer to it with a fully-qualified var name (using `var` or the `#'` reader macro).

*src/example.clj*

```
(ns example)
(defn ^:private foo [x] (* x x))
```

*test/example_test.clj*

```
(ns example-test
  (:require [clojure.test :refer :all]
            [example :refer :all]))

(deftest test-foo
  ;; This won't work because foo is private.
  (is (= 4 (foo 2)))
  ;; Referring to the fully-qualified var *does* work.
  (is (= 4 (#'example/foo 2))))
```

While I think this is perfectly OK for tests, it's probably a very bad idea in any other context. The other takeaway is that you can't depend on private vars *actually* being private: everything is accessible if you refer to it directly.

The list implementation itself is based on Bartosz Milewski's immutable list class (Functional Data Structures in C++: Lists) with some simplifications.

```
// Defined externally to List because it needs to be independent of the size parameter.
template <typename T>
struct Item {
    Item(T v, std::shared_ptr<const Item<T>> const & tail) :
        _val(v), _next(tail) {}
    T _val;
    std::shared_ptr<const Item<T>> _next;
};

// List type, parameterized over element type and length.
template <typename T, int size>
class List {
public:
    List() {}
    List(T v, List<T, size - 1> const & tail) :
        _head(std::make_shared<Item<T>>(v, tail._head)) {}
    explicit List(std::shared_ptr<const Item<T>> items) : _head(items) {}

    bool isEmpty() const { return !_head; }

    T front() const {
        assert(!isEmpty());
        return _head->_val;
    }

    List<T, size - 1> pop_front() const {
        assert(!isEmpty());
        return List<T, size - 1>(_head->_next);
    }

    List<T, size + 1> push_front(T v) const {
        return List<T, size + 1>(v, *this);
    }

    // May be null.
    std::shared_ptr<const Item<T>> _head;
};
```

The `List` constructor isn't intended to be used directly; the "safe" constructor `empty` should be used instead, which ensures that the `size` template parameter is set to 0.

```
// Helper for constructing empty lists.
template <typename T>
List<T, 0> empty() {
    return List<T, 0>();
}
```

Why is this useful? Quite simply, it helps ensure type safety. For example, if you're working on a neural network library, you'll probably need to calculate the dot product of two variable-length vectors. You could of course just use `std::vector` and check that the lengths are equal at runtime. But we're going to go one better and enforce equal lengths at compile time!

The function we want to implement will look something like this:

```
template <typename T, int size>
T dotProduct(List<T, size> a, List<T, size> b) {
    ...
}
```

Because the `size` arguments are the same, trying to take the dot product of two lists with different sizes simply won't compile. Instead you will get a lovely `cannot convert argument...` error (or `template parameter 'size' is ambiguous` if you let the compiler infer the types).

We're going to need two other functions first to implement this: `zipWith` and `fold`.

The `zipWith` function takes a function `T(U, V)` and two lists of equal length, one containing elements of type `U` and the other elements of type `V`. A new list is generated from the results of the function applied to each pair of elements from the lists.

```
// (u -> v -> t) -> [u] -> [v] -> [t]
template <typename T, typename U, typename V, int size>
List<T, size> zipWith(std::function<T(U, V)> f, List<U, size> us, List<V, size> vs) {
    if (us.isEmpty()) {
        // Use constructor directly instead of empty().
        // The type checker can't infer List<T, size> ~ List<T, 0>.
        return List<T, size>();
    } else {
        return zipWith<T, U, V, size - 1>(
            f,
            us.pop_front(),
            vs.pop_front()).push_front(f(us.front(), vs.front()));
    }
}
```

There is a slight problem with this implementation: template instantiation will fail to terminate. Even though it's clear at the term level that the recursion stops when `size == 0`, the template system keeps going, infinitely recursing to build the types `List<T, -1>` and so on.

To stop this, it's necessary to write a specialized template for `size == 0`. However, we only want to specialize on `size`, leaving `T`, `U` and `V` as abstract types. Unfortunately, C++ only allows partial specialization of class templates, not function templates. What we would like is something like this:

```
// Invalid C++.
template <typename T, typename U, typename V>
List<T, 0> zipWith(std::function<T(U, V)>, List<U, 0> us, List<V, 0> vs) {
    return empty<T>();
}
```

But until that feature finds its way into the spec, we'll have to write individual complete specializations for each combination of types. For the dot product example, we want one for `T, U, V == int`:

```
template <>
List<int, 0> zipWith(std::function<int(int, int)>, List<int, 0> us, List<int, 0> vs) {
    return empty<int>();
}
```

One of the nice things about using specialization is that the `isEmpty()` check is no longer required in the main definition.

The `fold` function takes a function, a starting value and a list, and produces a summary value. This implementation requires a specialization for the empty list, just like `zipWith`.

```
// (u -> t -> t) -> t -> [u] -> t
template <typename T, typename U, int size>
T fold(std::function<T(U, T)> f, T acc, List<U, size> xs) {
    T nextAcc = f(xs.front(), acc);
    return fold<T, U, size - 1>(f, nextAcc, xs.pop_front());
}

template <>
int fold(std::function<int(int, int)> f, int acc, List<int, 0> xs) {
    return acc;
}
```

So finally we can implement our type-safe `dotProduct`:

```
template <typename T, int size>
T dotProduct(List<T, size> a, List<T, size> b) {
    // Sum the element-wise products.
    return fold<T, T, size>(
        [](T x, T y) { return x + y; },
        0,
        zipWith<T, T, T, size>(
            [](T x, T y) { return x * y; },
            a,
            b));
}
```

The full source code for this example can be found in this Gist.

I have already started to expose some of the "hidden" documentation in the source code, and to add some extra details where I'm able, but there are still lots of gaps to be filled. For example, the wiki has a number of pages explaining a selection of Yampa's different "switch" functions, which are core to making a program "react" to events. The page for the standard `switch` is the only one with *any* documentation beyond a type signature and a diagram. The standard `switch` is relatively intuitive, but the rest have next to no documentation anywhere: in the wiki, on Hackage, or in the source code.

In order to demonstrate these functions I will be using the commonly used falling-ball example as a starting point. It's based on the example used in Jekor's "Code Deconstructed" episodes on Cuboid. In the following code, a ball (represented by the type `Ball`) falls through space from a standstill under the effect of gravitational acceleration (9.81 m/s^2). The initial state of the ball is given by the top-level value `ball`. The signal function `fallingBall` takes the initial state and integrates the effect of gravity on the ball's velocity, and the effect of the ball's velocity on its position.

```
type Scalar = Double
type Vec3 = (Scalar, Scalar, Scalar)
type Pos = Vec3
type Vel = Vec3
type Acc = Vec3

data Ball = Ball { pos :: Pos, vel :: Vel } deriving (Show, Read, Eq)

-- Initial state.
ball :: Ball
ball = Ball (0, 10, 0) (0, 0, 0)

gravityVector :: Acc
gravityVector = (0, -9.81, 0)

-- Signal function.
fallingBall :: Ball -> SF () Ball
fallingBall initial = proc _ -> do
    v <- integral >>^ (^+^ v0) -< gravityVector -- Add gravitational acceleration to velocity
    p <- integral >>^ (^+^ p0) -< v             -- Add velocity to position
    returnA -< initial { pos = p, vel = v }
  where
    v0 = vel initial
    p0 = pos initial
```

We will now extend this example with a signal function which makes the ball bounce when it reaches y=0. To do this, the effect of the signal function must be changed by using a switch.

In Yampa, signal functions can be swapped out in response to events; this facilitates the "reactive" part of functional reactive programming. Switch functions create the signal functions which are able to do this. There are a few different basic switch functions: the standard `switch`, the recurring `rSwitch`, and the "call-with-current-continuation" `kSwitch`. There are also parallel switches, which won't be discussed in this blog post. Every switch function also has a "delayed observation" counterpart prefixed with a `d`; the switching events of these functions are non-strict and their effects are not immediately observable.

We will now use each type of switch in turn to demonstrate how they work and how they differ from each other.

The standard basic switch has the following type:

```
switch :: SF in (out, Event t)
       -> (t -> SF in out)
       -> SF in out
```

If you're new to Yampa, this could look fairly intimidating, but it's really quite simple. The wiki has this to say about it:

> A switch in Yampa provides change of behavior of signal functions (SF) during runtime. The function 'switch' is the simplest form which can only be switched once. The signature is read like this: "Be a SF which is always fed a signal of type 'in' and returns a signal of type 'out'. Start with an initial SF of the same type but which may also return a transition event of type 'Event t'. In case of an Event, Yampa feeds the variable 't' to the continuation function 'k' which produces the new SF based on the variable 't', again with the same input and output types."

Here is an example of `switch` in use.

```
-- Get the y component of a given vector.
y3 :: Vec3 -> Scalar
y3 (x, y, z) = y

-- Flip the y component of velocity to simulate a fully-elastic collision.
bounce :: Ball -> Ball
bounce (Ball p (x, y, z)) = Ball p (x, -y, z)

update :: Ball -> SF () Ball
update initial = switch update' onBounce
  where
    update' = proc _ -> do
      b' <- fallingBall initial -< ()      -- Apply falling signal function
      e <- edge -< y3 (pos b') <= 0        -- Detect floor and raise event
      returnA -< (b', e `tag` b')          -- Return ball and event tagged with ball
    onBounce ball' = update $ bounce ball' -- The new signal function to switch to
```

One of Yampa's more interesting features is the ability to "freeze" a signal function and reactivate it later. `kSwitch` is a switch function which lets you keep the old signal function in a frozen state for later use.

```
kSwitch :: SF a b                  -- Update
        -> SF (a, b) (Event c)     -- Trigger based on input and output of update SF
        -> (SF a b -> c -> SF a b) -- Generate new SF from old SF and event value
        -> SF a b
```

This type signature is a little more complicated, so let's break it down:

- The first argument is the signal function that you want to switch
- The second signal function acts as a trigger: it reads the first signal function's input and output, and outputs an event when it wants to initiate the switch
- The third argument is a function which generates the signal function to switch to. It takes the old signal function (frozen in state) and the event's value as its arguments

Other than having access to the old signal function, the other advantage of `kSwitch` over `switch` is that it doesn't require that the signal function being switched be kept separate from the one generating the event. The same functionality implemented above can be re-implemented with `kSwitch` as follows:

```
-- y3 and bounce as above.
update :: Ball -> SF () Ball
update initial = kSwitch (fallingBall initial) trigger cont
  where
    trigger = proc (_, ball') -> do
      e <- edge -< y3 (pos ball') <= 0
      returnA -< e `tag` ball'
    cont old e = update $ bounce e
```

In the source code, `rSwitch` is described as a "recurring switch". As opposed to `switch` and `kSwitch`, `rSwitch` takes a signal function and produces a modified one which, in addition to its usual input value, also takes an event tagged with a new signal function to switch to. This is the type signature:

```
rSwitch :: SF a b
        -> SF (a, Event (SF a b)) b
```

This arrangement makes switching less flexible because the event which triggers the switch cannot come from the signal function being switched. This makes an implementation of `bounce` using `rSwitch` hard, if not impossible (I have not found a solution). However, `rSwitch` does have its uses. For example, to cycle through a list of signal functions, changing every n seconds, there is a very elegant solution using `rSwitch`:

```
cycler :: Time -> [SF a b] -> SF a b
cycler int sfs = proc inp -> do
  e <- afterEach $ zip (repeat int) sfs -< ()
  rSwitch (head sfs) -< (inp, e)
```

In this example, `afterEach` takes a list of `(Time, a)` pairs and produces an `Event a` after the duration specified with each value. Used in conjunction with `cycle` from `Prelude` (which loops a finite list to make an infinite list), the cycle can continue forever.

```
-- Cycle between three different signal functions for all time, changing every 5 seconds.
cycler 5 $ cycle [sf1, sf2, sf3]
```

Another thing to note is that after the switch event, the switch is still in place, ready to accept any new signal functions. This is why it's a "recurring" switch. `switch` and `kSwitch`, on the other hand, require that the new signal function have its own switch defined to make the behaviour repeat.

Yampa's basic switches provide three different methods of changing the running signal function in response to events:

- The simple `switch`, which requires that the signal function being switched provide an event in addition to the normal output
- The "call-with-current-continuation" `kSwitch`, which has a separate "trigger" signal function, and provides the frozen state of the old signal function so it can be re-used later
- The repeating `rSwitch`, which takes the new signal function from the event triggering the switch

For my final year project at university, I developed a purely functional ray tracer in Haskell which I have since open sourced. While working on it as a project I had extremely limited flexibility with regards to adding additional features or substantially altering the design. However, now that the project is done and dusted, I have been working on a few changes I've been wanting to make for a while; one of which is a fully programmable material shader system to replace the original pre-defined set of choices for materials.

```
-- Original material type.
data Material = Shaded Colour
              | Shadeless Colour
              | Emissive Colour Scalar
              | Reflective
              | Transmissive Double Double
```

The different constructors for `Material` were pattern-matched in a giant monolithic function which handled each material type in its own way. While I wasn't happy with this system, it served its purpose for the project, which didn't require much in the way of flexibility or extensibility. Even though HaskRay isn't intended for any sort of serious use, I felt a more programmable system would not only be interesting but could teach me a lot at the same time. This post describes and attempts to explain the system I devised using John Hughes' arrow abstraction to create programmable materials.

If you've ever used a 3D program such as Blender or 3DS Max, you've probably seen some kind of material properties panel (above). If not, it's a very simple method of defining the way a surface is rendered by choosing colours and numerical values for different elements such as a base colour, specular highlights, transparency, reflection, etc. This gives an artist a certain intuition of what a material is, but materials are really represented quite differently inside a ray tracer. The mathematical core of a ray-traced material is the Bidirectional Reflectance Distribution Function (BRDF): a function which defines the ratio of light reflected in the eye direction to the negated incident light direction, with respect to the surface normal (below).

What does a BRDF look like as a Haskell function? Well, in order to calculate raw intensity we only require three values: the view direction, the light direction and the surface normal. HaskRay's `Intersection` type encapsulates the view direction and surface normal; the light direction can be provided as a separate parameter. The type `Intersection -> Vec3 -> Scattering` seems reasonable, and such a function can be partially applied to get a function `Vec3 -> Intensity` which can then be mapped over multiple light sources. This type could also be made into a monad by replacing `Scattering` with a type parameter. This would enable us to compose materials together to create more complicated ones, just like functions in the `State` monad can be composed.

This works well enough for simple diffuse materials, but in order to create mirror or glass-like materials, or cast shadows, the BRDF must be able to trace additional rays. The trace function in HaskRay is in the `Render` monad, a simple state monad used elsewhere in the renderer for random number generation and tracking recursion depth. Adding the HaskRay trace function into our BRDF type gives us `(Ray -> Render (Maybe (Scalar, Intersection, Scattering Colour, Bool))) -> Intersection -> Vec3 -> Render a`, which is still a monad, even though it's using another monad internally.

However, there is one more piece missing from the puzzle, and that is the ability to track which materials emit light and need to be treated as light sources. This is important for the renderer to know so that it can calculate light direction vectors to pass to the BRDFs (it could assume that *every* material emits light, but that would be computationally impractical). This extra piece of information must be kept alongside the BRDF and preserved when composed with other materials. With this addition, the material is no longer a monad, but an arrow instead. This is what it looks like:

```
-- New material type.
data Material a b = Material {
  isEmissive :: Bool,
  closure :: a -> (Ray -> Render (Maybe (Scalar, Intersection, Scattering Colour, Bool))) -> Intersection -> Vec3 -> Render b
}

data Scattering a = Scattering { reflected :: a, transmitted :: a } deriving (Show, Read, Eq, Functor)
-- Also defined are instances for Applicative and Monoid.

holdout :: Scattering Colour
holdout = mempty -- Reflected and transmitted components set to black.
```

Why isn't `Material` a monad? Looking at the type of `>>=` makes it quite clear: `(>>=) :: Monad m => m a -> (a -> m b) -> m b`. If the left-hand side of `>>=` is emissive then the combination with another material must also be emissive. However, if the left-hand side is *not* emissive, then the new value of `isEmissive` depends on the computation generated by the right-hand side, which can't be known within the definition of `>>=`. Not convinced? Try to implement `>>=` for yourself!

Even though it's not a monad, `Material` *is* an arrow! Here are the instance definitions for `Category` and `Arrow`:

```
instance Category Material where
  id = Material False $ \inp _ _ _ -> return inp -- The identity material does nothing and is not emissive (hence False)
  Material im1 cl1 . Material im2 cl2 = Material (im1 || im2) $ \inp trace int om_i -> do -- Preserve isEmissive flag by ORing
    x1 <- cl2 inp trace int om_i
    cl1 x1 trace int om_i

instance Arrow Material where
  arr f = Material False $ \inp _ _ _ -> return $ f inp -- Lift a pure function into the arrow; only requires the input value
  first (Material im cl) = Material im $ \(x1, x2) trace int om_i -> do -- Material a b -> Material (a, c) (b, c)
    r1 <- cl x1 trace int om_i
    return (r1, x2)
```

The definitions are a little messy, but such is life when dealing with a type such as `Material`. It could also be beneficial in some cases to define materials with conditional statements. To enable these, we also require an instance for `ArrowChoice`:

```
instance ArrowChoice Material where
  left (Material im cl) = Material im inner
    where
      inner (Left x) trace int om_i = fmap Left $ cl x trace int om_i
      inner (Right x) _ _ _ = return $ Right x
```

Let's define some basic materials. My own understanding of how these functions *should* be implemented is limited at best; it's something I intend to improve further down the line. If you're not familiar with shading algorithms, feel free to gloss over these definitions:

```
-- Simple diffuse shading without shadows.
diffuseShading :: Material Colour (Scattering Colour)
diffuseShading = Material False fun
  where
    fun col _ (Intersection {inorm}) om_i = return $ holdout { reflected = ref }
      where
        ref = scale (max 0 (om_i `dot` inorm)) col

-- Primitive emissive material.
emissive :: Material (Colour, Scalar) (Scattering Colour)
emissive = Material True $ \(col, power) _ _ _ -> return $ holdout { reflected = power `scale` col }

-- Primitive reflective material.
mirror :: Material () (Scattering Colour)
mirror = proc () -> do
  maxDepth <- liftRender atMaxDepth -< ()
  if maxDepth
    then returnA -< holdout
    else do
      (Intersection {ipos, inorm, iray}) <- getIntersection -< ()
      traced <- traceM -< Ray ipos $ rdir iray `sub` scale (2 * (inorm `dot` rdir iray)) inorm
      returnA -< maybe holdout (\(_, _, scattering, _) -> scattering) traced
```

It would also be nice to include some functions which can combine the effects of different materials. Here we define `addMaterial` and `mixMaterial`, which operate on `Scattering` values to blend them together.

```
-- Primitive material function which adds two Scatterings together.
addMaterial :: Material (Scattering Colour, Scattering Colour) (Scattering Colour)
addMaterial = arr $ uncurry (<>)

-- Mix two scatterings according to a given ratio.
mixMaterial :: Scalar -> Material (Scattering Colour, Scattering Colour) (Scattering Colour)
mixMaterial r = proc (s1, s2) -> do
  let ri = 1 - r
  let s1' = (*r) <$> s1
  let s2' = (*ri) <$> s2
  addMaterial -< (s1', s2')
```

By composing primitive materials like `mirror` and `transmissive` with functions like `mixMaterial`, more complex materials can be created. For example, a glass-like material, which adds a small amount of reflection to a refraction, can be created from these simple building blocks like so:

```
-- Adds a subtle reflection to pure transmission.
glass :: Material () (Scattering Colour)
glass = proc () -> do
  m <- mirror -< ()
  t <- transmissive -< (1.5, 0.9)
  mixMaterial 0.5 -< (m, t)
```

All this provides an extensible framework for building arbitrarily complex materials, while enabling the renderer to recognise materials which act as light sources. I intend to continue working on this when I have time - both creating more exotic materials with this system and improving the rest of the renderer. To finish, here is a scene rendered with the materials and system just discussed.

**BRDF Diagram** (Meekohi) CC BY-SA