Does anyone who wasn't involved in creating these large open source math libraries have good success using them? During my PhD I tinkered with so many different PDE solvers, FEM packages, and the like, and I ended up just coding everything from scratch for my specific problem. I don't know about Morpho specifically, but I often found it so difficult to understand how the authors were thinking about the problem that it wasn't worth the time to learn the ins and outs of the software.
In my experience, there are usually enough differences in the downstream applications that it can be difficult to adapt these solvers to different use cases. These libraries often make subtle choices about the way data is laid out that can be hard to reconcile. I agree that not being able to read the author's mind can put you at a big disadvantage.
The libraries that are more generically useful are lower level ones like the linear algebra libraries: BLAS, LAPACK, SuiteSparse, etc. I think the key to their success is at least in part because of a shit simple common data format. A dense matrix is just a bunch of floats packed in memory... if that's your common format, you can do a lot.
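To make that concrete, here's a minimal sketch in C of that common format at work: a 2x2 matrix is literally four doubles packed in a row, and that flat array plus a leading dimension is all a standard CBLAS routine needs. The values are just illustrative.

```c
#include <stdio.h>
#include <cblas.h>  /* link with -lcblas or an equivalent BLAS */

int main(void) {
    /* A 2x2 dense matrix is just 4 doubles, row-major. */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0};

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K */
                1.0, A, 2,      /* alpha, A, lda */
                B, 2,           /* B, ldb */
                0.0, C, 2);     /* beta, C, ldc */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```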
Also, relative to other domains in scientific computing, there tend not to be too many ways of doing something in linear algebra. Gaussian elimination with optional full or partial pivoting is basically good enough. QR factorization, rank revealing or not.
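For instance, Gaussian elimination with partial pivoting is a single LAPACK call; a minimal sketch using the LAPACKE C interface, with an arbitrary 3x3 system as the example:

```c
#include <stdio.h>
#include <lapacke.h>  /* link with -llapacke -llapack */

int main(void) {
    /* Solve A x = b for a 3x3 system, row-major. */
    double A[9] = {2.0, 1.0, 1.0,
                   1.0, 3.0, 2.0,
                   1.0, 0.0, 0.0};
    double b[3] = {4.0, 5.0, 6.0};
    lapack_int ipiv[3];  /* pivot indices from the partial pivoting */

    /* dgesv: LU factorization with partial pivoting, then solve.
       b is overwritten with the solution x. */
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1,
                                    A, 3, ipiv, b, 3);
    if (info != 0) {
        fprintf(stderr, "dgesv failed: info = %d\n", (int)info);
        return 1;
    }
    printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);
    return 0;
}
```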
On the other hand, even "simple" things like function approximation get complicated quickly. There is no one-size-fits-all solution. Do you want to do an L2 or minimax approximation? Or would you rather interpolate a function? What polynomial basis do you want to use? What degree of polynomial? Piecewise? Splines? Oh yeah, there are rationals, too...
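To give a flavor of just one branch of that decision tree: an L2 (least-squares) fit of a quadratic comes down to building a Vandermonde matrix and making one LAPACK call. The sample points below are made up, and a minimax fit, spline, or rational approximation would each need entirely different machinery.

```c
#include <stdio.h>
#include <lapacke.h>  /* link with -llapacke -llapack */

int main(void) {
    /* Least-squares fit of c0 + c1*x + c2*x^2 to 5 sample points. */
    enum { M = 5, N = 3 };
    double x[M] = {0.0, 0.5, 1.0, 1.5, 2.0};
    double y[M] = {1.0, 1.3, 2.1, 3.2, 5.1};

    /* Vandermonde design matrix, row-major: A[i][j] = x[i]^j. */
    double A[M * N];
    for (int i = 0; i < M; i++) {
        A[i * N + 0] = 1.0;
        A[i * N + 1] = x[i];
        A[i * N + 2] = x[i] * x[i];
    }

    /* dgels solves min ||A c - y||_2 via QR; y is overwritten and
       its first N entries become the coefficients. */
    lapack_int info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N',
                                    M, N, 1, A, N, y, 1);
    if (info != 0) {
        fprintf(stderr, "dgels failed: info = %d\n", (int)info);
        return 1;
    }
    printf("c0=%g c1=%g c2=%g\n", y[0], y[1], y[2]);
    return 0;
}
```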
Solving linear or nonlinear equations brings in many different iterative solvers. Maybe you need preconditioning?
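As a taste of that zoo, here is a sketch of conjugate gradients with the simplest possible preconditioner (Jacobi, i.e. dividing by the diagonal) on a small SPD system I made up; production libraries wrap dozens of knobs around this one loop.

```c
#include <stdio.h>
#include <math.h>

enum { N = 3 };

/* y = A x for a fixed 3x3 symmetric positive-definite matrix. */
static void apply_A(const double x[N], double y[N]) {
    static const double A[N][N] = {{4.0, 1.0, 0.0},
                                   {1.0, 3.0, 1.0},
                                   {0.0, 1.0, 2.0}};
    for (int i = 0; i < N; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
    }
}

static double dot(const double a[N], const double b[N]) {
    double s = 0.0;
    for (int i = 0; i < N; i++) s += a[i] * b[i];
    return s;
}

int main(void) {
    const double b[N] = {1.0, 2.0, 3.0};
    const double diag[N] = {4.0, 3.0, 2.0};  /* Jacobi preconditioner: diag(A) */
    double x[N] = {0.0}, r[N], z[N], p[N], Ap[N];

    /* With x = 0: r = b, z = M^{-1} r, p = z. */
    for (int i = 0; i < N; i++) { r[i] = b[i]; z[i] = r[i] / diag[i]; p[i] = z[i]; }
    double rz = dot(r, z);

    for (int k = 0; k < 100 && sqrt(dot(r, r)) > 1e-12; k++) {
        apply_A(p, Ap);
        double alpha = rz / dot(p, Ap);
        for (int i = 0; i < N; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        for (int i = 0; i < N; i++) z[i] = r[i] / diag[i];  /* preconditioner solve */
        double rz_new = dot(r, z);
        double beta = rz_new / rz;
        for (int i = 0; i < N; i++) p[i] = z[i] + beta * p[i];
        rz = rz_new;
    }
    printf("x = [%g, %g, %g]\n", x[0], x[1], x[2]);
    return 0;
}
```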
Solving PDEs combines all this complication and difficulty and then throws geometry into the mix, which is even more subtle and difficult to get right than all the rest.
I think in a lot of cases writing your own stuff just means you can sidestep all this. Writing a library that is general and handles all this stuff elegantly and efficiently is really hard. Using one can be just as hard.
Edit: I think my point about the common data format also partly explains the success of "scientific Python" and MATLAB, with Julia struggling to gain a foothold. In MATLAB, everything is a matrix; in scientific Python, numpy's ndarray is the common data format. I've seen people waste a lot of time shuffling data back and forth between different Julia libraries' idiosyncratic data formats.
I tried one. As I recall, the examples were gigantic, they expected scenes to be described in XML, and it took an hour to compile.
It looked cool in demos, but there was no way to get a foothold on it as a library. I never got it to do anything useful. I only made modest progress by implementing a custom XML tag that let me get my data into one of their huge example programs.
I'm a better programmer now than I was then, but I still kinda think that software sucked.
> The Morpho language. Morpho is a programmable environment for shape optimization and scientific computing tasks more generally. Morpho aims to be:
> - Familiar. Morpho uses syntax similar to other C-family languages. The syntax fits on a postcard, so it's easy to learn.
> - Fast. Morpho programs run as efficiently as other well-implemented dynamic languages like wren or lua (Morpho is often significantly faster than Python, for example). Morpho leverages numerical libraries like BLAS, LAPACK and SUITESPARSE to provide high performance.
> - Class-based. Morpho is highly object-oriented, which simplifies coding and enables reusability.
> - Extendable. Functionality is easy to add via packages, both in Morpho and in C or other compiled languages. Packages can be downloaded, installed and distributed via the morphopm package manager.
This is likely just journalism, but the title (that of the original article) is slightly misleading. This appears to be a programming language for modelling soft materials... although I am not sure (having looked only at the GitHub repo and the Read the Docs pages) that I understand what kind of research this targets... the roadmap and API make it seem fairly general purpose. IMO it would be nice to see some (at least potential) use cases directly in the repository.
I don't know about this project specifically, but this kind of work is soon going to be tremendously valuable because of AI. Instead of engineers designing products directly, they will build a simulator, write an objective function, and submit this data to a general purpose ML/RL API hosted by one of the big labs. The AI will run a billion simulations and use RL to create a design that optimizes the objective function.
I'm not so sure about this. We are not even at the point where auto placement and routing for PCBs is there. And the reason is simple: the number of constraints required is just too much work for a person to put in. They may as well do the design themselves at that point.
I would expect most design is like this. There are thousands of constraints a designer has in the back of their head, most of which they are not even consciously aware of. The optimization of the objective is the trivial part. Defining the proper objective function will be very hard.
This is actually one of the scenarios where AI (I just mean machine learning) would have a real value proposition, because of the need to infer the implicit constraints from many example circuits. Figuring out all the things that people think are obvious, but that take too long to input, is kind of the thing AI is useful for.
It's been tried. PCB design is a huge industry, and it has just not worked. I was of the same opinion you have, but it's not like there haven't been millions and millions invested in this without real impact. Every year there is a new wave of companies that tries. Perhaps AI is now good enough; I'm not holding my breath.
It's also been tried in the mechanical world ("Generative Design" in Autodesk's language) and it's still mostly in the "cool demo, bro" phase. The parts end up being expensive and difficult to manufacture due to the unusual geometry. You're penalized for exploring the design space because it runs on cloud credits (more exploration == more cost). Just not very compelling yet.
> MIT License
> Languages: C 98.8% Python 0.7% CMake 0.5% Batchfile 0.0% Makefile 0.0% Objective-C 0.0%
https://github.com/Morpho-lang/morpho
That builds libmorpho.so; the CLI is at:
https://github.com/Morpho-lang/morpho-cli
I’m not involved in this project in any way, just trying it out because it sounds interesting.
https://www.youtube.com/watch?v=odCkR0PDKa0
Simulate Literally Everything!
> I would expect most design is like this. There are thousands of constraints a designer has in the back of their head, most of which they are not even consciously aware of. The optimization of the objective is the trivial part. Defining the proper objective function will be very hard.
I'm not holding my breath either though.