After MEGA 2024
I recently attended MEGA, possibly the largest conference by number of attendees that I have ever been to.
I was lucky enough to have a submission accepted for a computation showcase, which meant that I had the great opportunity to give a talk on the very first day.
I took the chance to give possibly the first live demonstration of QuiverTools, “a package to deal with quivers and quiver representations” that I cowrote with Pieter Belmans and Hans Franzen.
Setting aside the nerve-wracking experience of running code live in front of almost one hundred people, I was happy to be able to show many cool features of our package: given a quiver, a dimension vector, and a stability parameter, QuiverTools can give plenty of information about the resulting quiver moduli out of the box!
One starts by building a quiver moduli space:
julia> using QuiverTools
julia> Q = mKronecker_quiver(3); M = QuiverModuliSpace(Q, [2, 3]);
Once we have an object M, we can compute its dimension and test properties like smoothness and projectivity trivially:
julia> dimension(M)
6
julia> is_smooth(M)
true
julia> is_projective(M)
true
Just as easily, one can compute some interesting invariants:
julia> Betti_numbers(M)
13-element Vector{Int64}:
1
0
1
0
3
0
3
0
3
0
1
0
1
julia> Hodge_diamond(M)
7×7 Matrix{Int64}:
1 0 0 0 0 0 0
0 1 0 0 0 0 0
0 0 3 0 0 0 0
0 0 0 3 0 0 0
0 0 0 0 3 0 0
0 0 0 0 0 1 0
0 0 0 0 0 0 1
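Since all the odd cohomology vanishes and the Hodge structure is concentrated on the diagonal, the entries of the Hodge diamond add up to the topological Euler characteristic, which matches the sum of the Betti numbers. As a quick sanity check, using only the output above:
julia> sum(Hodge_diamond(M))  # sum of all Hodge numbers = Euler characteristic
13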
If one is interested in the Chow ring of this variety, our package offers some useful features:
julia> CH, CHvars = Chow_ring(M)
(Singular polynomial quotient ring (QQ),(x11,x12,x21,x22,x23),(dp(5),C), Singular.spoly{Singular.n_Q}[x11, x12, x21, x22, x23])
julia> I = QuiverTools.quotient_ideal(CH)
Singular ideal over Singular polynomial ring (QQ),(x11,x12,x21,x22,x23),(dp(5),C) with generators (x11 - x21, 3*x12*x23 - 2*x22*x23, 3*x12^2 - 3*x12*x22 + x22^2 - x21*x23, x23^3, x22*x23^2, x21*x23^2, x22^2*x23, x21*x22*x23 - 3*x23^2, 3*x21^2*x23 - 5*x22*x23, x22^3 - 3*x21*x22*x23, x21*x22^2 - x21^2*x23 - 3*x22*x23, 6*x12*x22^2 - 13*x21*x22*x23 + 9*x23^2, x21^2*x22 - 4*x12*x22 + x22^2 - x21*x23, x12*x21*x22 - 3*x22*x23, x21^3 - 4*x12*x21 + 3*x23, x12*x21^2 - 3*x12*x22 + x22^2 - x21*x23)
One can also obtain Chern characters of the universal bundles, the Todd class, and the point class, and integrate classes over the moduli space:
julia> u2 = QuiverTools.Chern_character_universal_bundle(M, 2)
1//720*x21^6 + 1//120*x21^5 - 1//120*x21^4*x22 + 1//24*x21^4 - 1//24*x21^3*x22 + 1//80*x21^2*x22^2 + 1//120*x21^3*x23 + 1//6*x21^3 - 1//6*x21^2*x22 + 1//24*x21*x22^2 - 1//360*x22^3 + 1//24*x21^2*x23 - 1//60*x21*x22*x23 + 1//2*x21^2 - 1//2*x21*x22 + 1//12*x22^2 + 1//6*x21*x23 - 1//24*x22*x23 + 1//240*x23^2 + x21 - x22 + 1//2*x23 + 3
julia> Todd_class(M)
-17//8*x12*x21 + x21^2 + 823//360*x12*x22 - 823//1080*x22^2 + 553//1080*x21*x23 - 77//60*x22*x23 + x23^2 + 5//12*x12 - 3//2*x21 + 9//8*x23 + 1
julia> point_class(M)
x23^2
julia> integral(M, u2)
0
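By Hirzebruch-Riemann-Roch, the Euler characteristic of the universal bundle combines the two classes above: χ(U_2) is the integral of ch(U_2) times the Todd class. Here is a minimal sketch of how one might put this together, assuming the classes returned above live in the same ring presentation and can be multiplied directly (the variable names td and chi are mine):
julia> td = Todd_class(M);
julia> chi = integral(M, u2 * td);  # χ(M, U_2) by Hirzebruch-Riemann-Roch, assuming compatible presentations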
QuiverTools has many more features, which one can discover either in its Julia version or in its SageMath version.
One might wonder why there are two versions of the same software. Well.
Both versions offer the same functionality, and they really only differ in speed: the Julia version is faster at almost everything, since it is written in a compiled language. A quick experiment on a MacBook Pro (M1 Pro) gives the following (note that this particular method is memoized, so we have to empty the caches between runs, or only time a single run, to get accurate runtimes):
julia> using QuiverTools, BenchmarkTools, Memoization
julia> Q, d, theta = mKronecker_quiver(3), [2, 3], [3, -2];
julia> function tobench()
all_HN_types(Q, d, theta)
Memoization.empty_all_caches!()
end
julia> @benchmark tobench()
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 82.833 μs … 71.833 ms ┊ GC (min … max): 0.00% … 99.73%
Time (median): 85.375 μs ┊ GC (median): 0.00%
Time (mean ± σ): 95.266 μs ± 719.014 μs ┊ GC (mean ± σ): 9.14% ± 3.25%
▄██▇▆▄
▁▁▁▂▃▅▇██████▇▇▅▄▃▃▃▃▃▃▄▄▄▃▄▄▄▄▃▄▃▃▃▃▃▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▃
82.8 μs Histogram: frequency by time 95 μs <
Memory estimate: 83.27 KiB, allocs estimate: 1618.
About 85 microseconds. How does Sage fare?
sage: from quiver import *
sage: Q = KroneckerQuiver(3); d = (2, 3); theta = (3, -2);
sage: M = QuiverModuliSpace(Q, d, theta);
sage: timeit('M.all_harder_narasimhan_types()', number=1, repeat=1)
1 loop, best of 1: 26.7 ms per loop
About 27 milliseconds, i.e., roughly 300 times slower.
This makes Julia my preferred choice for experiments running over many (read: millions of) examples: what takes minutes or hours in Julia might take days or weeks in Sage!
On the other hand, the support for abstract algebra in Julia is (at the time of writing) far behind what Sage offers. If I only care about a single example, waiting 0.4 seconds instead of 0.0004 seconds does not matter to me. What does matter, however, is the enormous amount of external functionality that Sage offers, which I can interact with immediately instead of having to reimplement it in Julia.
A concrete example of this is Schur polynomials: these are immediately available in Sage, and the Sage version of QuiverTools uses them to simplify Chow ring computations. Since such an implementation does not exist in Julia, the workaround we found makes the Julia function Chow_ring() way slower than its SageMath counterpart. To be fair, this is also because on both sides we call underlying libraries for Gröbner basis computations, so the language we use does not matter as much; but again, this goes to show that, at least for now, abstract algebra packages in Julia are not as polished as in Sage.