Slightly more complexity than I had thought, or RTFM!
By joe
- 8 minutes read - 1594 words
So I’m playing with Julia in my off time to get more proficient with it. Doing some “simple” things in preparation for the work I want to do.
One of the things I like to play with is an environment's linear algebra capabilities. This was one of my favorite areas as an undergraduate (cringe) years ago, and it has been an important tool for me throughout my previous pre-professional career, working on a Ph.D. in physics. Indeed, my thesis was basically about constructing Hamiltonians for semiconductors under a variety of assumptions and simplifications, extracting eigenvalues and eigenvectors, using those to construct charge densities, then potentials, then potential gradients/forces, and then integrating the Hellmann-Feynman equations of motion. Rinse, and repeat.
Most of my old code is in Fortran, Perl4(!!!), C, with MPI, p-threads, etc. I’d always liked Matlab, but it was far too slow (back then) at doing what I needed.
So, along comes Julia. One language to bind them all.
And I want to play with it. No, I’m not working on research projects (apart from some of my own trivial fun things to keep my brain in shape); I’ve got a day job at a great company. That work is very interesting, and if I believe the comments I’ve seen in public, important to the future of the various firms that appear to make up my current and future employer. But it isn’t research work. So I play on my own frankenstein boxen at night at home.
FWIW, my playing is based upon nlytiq-base, which integrates the tools I want to use in a portable way across all my environments. Right now, they build for Linux and Mac. I tried FreeBSD, but I don’t see a real use case for it right now. I did try SmartOS as well, but as I am no longer with Joyent, and don’t have a use case for that (still wish Triton worked on any OS, vmadm and related tools are hard to beat), I’ve removed the build bits for that. Maybe someday …
So, I wanted to play with Julia, and I’d noticed my little sigmoid plotting bit took quite a while to run. I know it has to compile things the first time through, but I wanted to see if subsequent runs were faster.
They weren’t. This was troubling to me.
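A quick way to separate compile time from run time inside a single session is simply to call the code twice and time each call. A minimal sketch, using a stand-in sigmoid rather than my actual plotting script:
<code>sigmoid(x) = 1.0 ./ (1.0 .+ exp.(-x));   # stand-in for the real workload

@time sigmoid(randn(10^6));   # first call: includes JIT compilation
@time sigmoid(randn(10^6));   # second call: just the run time</code>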
So I thought, let me try something that should be fast, should be fairly painless, with a good sanity check. I know, LU decomposition! LU decomposition is one of those techniques that takes a matrix and factors it into the product of two matrices, a lower and an upper triangular matrix. Triangular matrices have some fairly cool properties. It turns out that they massively simplify finding determinants.
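For instance, the determinant of a triangular matrix is just the product of its diagonal entries. A tiny sketch of that property:
<code>using LinearAlgebra

T = UpperTriangular([2.0 1.0 3.0;
                     0.0 4.0 5.0;
                     0.0 0.0 0.5])

det(T)          # 4.0
prod(diag(T))   # 4.0 -- same thing, no elimination needed</code>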
Ok, so for a general square matrix a defined something like this
<code>n = 10;
a = zeros(n, n);
for i = 1:n
    for j = 1:n
        a[i,j] = 1.0 + (i*j)/(sqrt(i+j));
    end
end</code>
or
[latex] a_{i,j} = 1 + \frac{i \, j}{\sqrt{i+j}} [/latex]
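As an aside, the same matrix can be built in one line with an array comprehension:
<code>a = [1.0 + (i*j)/sqrt(i+j) for i in 1:n, j in 1:n];</code>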
The LU decomp of this array should be nice and simple
<code>using LinearAlgebra;
l,u = lu(a)</code>
Indeed, when I do this, this is what I see:
<code>
julia> using LinearAlgebra
julia> n=10
10
julia> a=zeros(n,n);
julia> for i = 1:n
for j = 1:n
a[i,j] = 1.0 + (i*j)/(sqrt(i+j));
end
end
julia> l,u=lu(a)</code>
<code>LU{Float64,Array{Float64,2}}
L factor:
10×10 Array{Float64,2}:
1.0 0.0 0.0 0.0 … 0.0 0.0 0.0
0.42517 1.0 0.0 0.0 0.0 0.0 0.0
0.694589 0.605103 1.0 0.0 0.0 0.0 0.0
0.536647 0.875608 0.708682 1.0 0.0 0.0 0.0
0.865448 0.269516 0.589892 -0.695697 0.0 0.0 0.0
0.622647 0.736664 0.969135 0.589964 … 0.0 0.0 0.0
0.957893 0.0842371 0.199161 -0.333167 0.0 0.0 0.0
0.757448 0.48392 0.913906 -0.43628 1.0 0.0 0.0
0.913216 0.173786 0.397498 -0.574669 -0.316149 1.0 0.0
0.813872 0.372482 0.76737 -0.661602 0.805544 -0.965203 1.0
U factor:
10×10 Array{Float64,2}:
4.01511 6.7735 9.3205 11.6904 … 21.6474 23.3607
0.0 -0.725191 -1.4628 -2.18158 -5.35779 -5.91715
0.0 0.0 -0.0532019 -0.143127 -0.809445 -0.955139
0.0 0.0 0.0 0.00397335 0.0751321 0.0950478
0.0 0.0 0.0 0.0 -0.0109584 -0.0156487
0.0 0.0 0.0 0.0 … -0.000280224 -0.000462217
0.0 0.0 0.0 0.0 -1.82359e-5 -3.79525e-5
0.0 0.0 0.0 0.0 -2.90738e-7 -8.47272e-7
0.0 0.0 0.0 0.0 1.15137e-9 6.33476e-9
0.0 0.0 0.0 0.0 0.0 4.20055e-11</code>
Look at that, a lower and an upper triangular matrix! Now, a sanity check.
<code>julia> l*u-a
10×10 Array{Float64,2}:
2.30801 4.6188 6.8205 8.9016 … 16.1895 17.8014 19.3456
-0.447594 -0.845299 -1.18328 -1.47713 -2.39298 -2.58115 -2.75839
0.288854 0.582705 0.861339 1.12128 2.00133 2.19037 2.36995
-0.634154 -1.26599 -1.85229 -2.39087 -4.17796 -4.5574 -4.91695
0.433632 0.887022 1.33748 1.77565 3.36513 3.72324 4.06755
-0.767787 -1.55936 -2.32577 -3.05389 … -5.59227 -6.14851 -6.6795
0.371176 0.760538 1.15345 1.54229 3.00343 3.34188 3.66992
-0.625425 -1.28 -1.93297 -2.57094 -4.906 -5.43581 -5.94624
-0.179383 -0.36756 -0.557956 -0.746999 -1.46257 -1.62932 -1.79124
-0.747327 -1.53086 -2.3205 -3.10098 -6.02764 -6.70468 -7.36068</code>
Wait …. wut ??!??!?
LU(a) should yield a set of matrices that, when multiplied together, equal the original a.
So, you’ve read this far, my dear reader. You see I like simple sanity checks. Things that you just know should work. Because, well, math!
Ok, let's try the same thing in Octave.
<code>#!/usr/bin/env octave
n=10
a=zeros(n,n);
for i = 1:n
for j = 1:n
a(i,j) = 1 + (i*j)/(sqrt(i+j));
endfor
endfor
[l,u]=lu(a);
l*u-a
octave:4> [l,u]=lu(a);
octave:5> l*u-a
ans =
Columns 1 through 6:
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 8.8818e-16 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
-4.4409e-16 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 1.7764e-15
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
-4.4409e-16 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 -1.7764e-15 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 -1.7764e-15 1.7764e-15 1.7764e-15
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
Columns 7 through 10:
4.4409e-16 -4.4409e-16 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
-8.8818e-16 -1.7764e-15 1.7764e-15 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 -1.7764e-15 0.0000e+00 0.0000e+00
1.7764e-15 -1.7764e-15 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 -3.5527e-15 0.0000e+00
3.5527e-15 0.0000e+00 0.0000e+00 3.5527e-15
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
octave:6> </code>
That is, a matrix very close to zero. So … hmmm … what is going on? Octave is getting this right while Julia isn’t.
Or is there something else going on?
Time to RTFM. For Octave … Hey look, there’s this permutation matrix I can pull out. Let's try that.
<code>octave:6> [l,u,p]=lu(a);
octave:7> p
p =
Permutation Matrix
0 0 0 0 0 0 0 0 0 1
1 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0
0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0
0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 1 0 0 0 0</code>
Ohhh … it's not the identity matrix … that's a big hint. That means that l*u - p*a should be nearly zero, not l*u - a …
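In other words, what the pivoted factorization actually satisfies is
[latex] L\,U = P\,a \quad\Longrightarrow\quad L\,U - P\,a \approx 0 [/latex]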
And sure enough in Octave, when using the three return value form of the factorization,
<code>octave:8> l*u - p*a
ans =
Columns 1 through 6:
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 8.8818e-16 0.0000e+00 0.0000e+00
-4.4409e-16 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 -1.7764e-15 1.7764e-15 1.7764e-15
-4.4409e-16 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 1.7764e-15
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 -1.7764e-15 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
Columns 7 through 10:
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
4.4409e-16 -4.4409e-16 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
-8.8818e-16 -1.7764e-15 1.7764e-15 0.0000e+00
3.5527e-15 0.0000e+00 0.0000e+00 3.5527e-15
0.0000e+00 -1.7764e-15 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 -3.5527e-15 0.0000e+00
1.7764e-15 -1.7764e-15 0.0000e+00 0.0000e+00
octave:9> </code>
with l (u) lower (upper) triangular as before. The LU factorization will pivot and permute rows as needed. You need to keep track of these.
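To see why the pivoting is unavoidable at all, here's a minimal toy case (my own, in Julia, since that's where we're headed): a matrix whose leading entry is zero. Naive elimination would divide by that zero, so the factorization swaps the rows and records the swap.
<code>using LinearAlgebra

b = [0.0 1.0;
     1.0 0.0];
F = lu(b);

F.p                      # == [2, 1]: the rows were swapped
F.L * F.U ≈ b[F.p, :]    # true</code>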
Back to Julia. The documentation is here … search for LinearAlgebra.lu. Scrolling down a little, we see a table of what the method returns, and the little section showing how to relate L, U, and p (the permutation vector).
Ok. Armed with this, let's simplify a bit.
<code> julia> M=lu(a);
julia> M.p
10-element Array{Int64,1}:
10
1
4
2
7
3
9
5
8
6
julia> M.P
10×10 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0</code>
Yup. There's that permutation matrix for us. So … L*U - P*a should be close to zero.
<code>julia> M.L * M.U - M.P * a
10×10 Array{Float64,2}:
0.0 0.0 0.0 0.0 … 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 -8.88178e-16 0.0 0.0
0.0 0.0 0.0 -8.88178e-16 0.0 0.0
-4.44089e-16 0.0 0.0 0.0 -3.55271e-15 0.0
0.0 0.0 0.0 0.0 … 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0
-4.44089e-16 0.0 0.0 0.0 -3.55271e-15 -1.77636e-15
0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 1.77636e-15 0.0 </code>
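The table in the docs also expresses the same relation with the permutation vector instead of the full matrix, which saves forming P at all. Both checks pass (≈ is Julia's floating point isapprox):
<code>M.L * M.U ≈ a[M.p, :]   # true: rows of a permuted by the vector M.p
M.L * M.U ≈ M.P * a     # true: same thing via the permutation matrix</code>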
Heh … The manuals are there for a reason.
Now my exploring will move on to profiling code in Julia, to try to understand why plotting is slow with DataFrames and StatsPlots.
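As a starting point, a minimal sketch with Julia's built-in sampling profiler, reusing the LU example above just to show the mechanics (the real target will be the plotting code):
<code>using Profile

lu(a);                       # warm up, so compilation doesn't dominate

@profile for k in 1:1000
    lu(a)
end
Profile.print()              # report of where the time was spent</code>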