Now I am become Life, the protector of worlds
After almost a year of development, the Arbital alpha is finally here. While the goal is for Arbital to become an all-encompassing platform, like Wikipedia, the way to get there is by doing an amazing job with each area of discussion we enter. Our first domain is Value Alignment Theory (VAT), and our first users are mostly AI safety researchers from organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute (FHI).
Arbital has big dreams, and feature-wise we are only about 20% of the way there. Even so, we've done a pretty good job of tackling our first challenge: explaining difficult concepts. Arbital's distinctive features, like lenses, requisites, and smart links, are all there specifically to make explanation and learning easier. That matters because the first problem we are tackling is so complicated; it should be a good test of whether everything works as we expected.
AI safety is becoming an increasingly hot topic, and given how important it is, our first priority is to make Arbital the place to discuss AI safety research. So, while you are very welcome to use the platform for anything else, VAT is the only topic for which the Arbital team will provide active support in the near future. Once we feel we've done a great job within that area and it's growing sustainably, we'll move on to our next area (probably effective altruism).
What you can do to help:
- Read through the existing content.
- If you have edit permissions, please do edit the pages if you spot mistakes.
- Port existing relevant content to Arbital.
- Report any bugs or annoying things you find. If you have opinions on how to improve the product, leave a comment on the appropriate page.
Have a Happy New Year, and we’ll see you in 2016!
Subscribe to Arbital’s blog to receive updates when there are new posts.