This should be done near the final version of Audivolv, after fixing all the serious bugs. It's dangerous for a global self-modifying AI network (even with the protection I will add) to have bugs.
All code (or some permutation of it) will be filtered through a user-customizable "code string firewall", which is simply a Java function that takes a String parameter, where that String is more Java code. The code-string-firewall throws a java.lang.Exception if anything about the input code String is not known to certainly be safe.
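The idea above can be sketched in a few lines. This is a hedged illustration only: the class name, method name, and forbidden-substring list are my assumptions, not Audivolv's actual firewall, and a real firewall would parse and whitelist rather than blacklist substrings.

```java
import java.util.List;

// Hypothetical sketch of a "code string firewall": a check that throws
// if a code String contains constructs not known to be safe.
public class CodeStringFirewall {

    // Illustrative list of substrings that are never allowed in generated code.
    private static final List<String> FORBIDDEN = List.of(
        "java.io", "java.net", "Runtime", "ProcessBuilder",
        "System.exit", "reflect", "ClassLoader");

    /** Throws if anything about the code String is not known to be safe. */
    public static void check(String code) throws Exception {
        for (String bad : FORBIDDEN) {
            if (code.contains(bad)) {
                throw new Exception("Unsafe code rejected: contains \"" + bad + "\"");
            }
        }
        // A real firewall would parse the code and whitelist allowed
        // constructs instead of blacklisting substrings like this sketch does.
    }

    public static void main(String[] args) throws Exception {
        check("double x = a * 0.5 + b;"); // harmless math code passes silently
        try {
            check("Runtime.getRuntime().exec(\"dangerous\");");
        } catch (Exception e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The important property is that rejection is the default: anything not provably safe throws, and the AI code never runs.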
In what ways is it not safe? There's no way for me to prevent it from communicating with the user to do unsafe things. For example, if Audivolv became very smart, it may learn to ask the user to change the code-string-firewall, or it may learn to ask the user to run dangerous commands.
That's part of the "Friendly AI" problem, a new area of research into an AI's interactions with people and how to cause the AI to keep the same goals even if it's allowed to modify itself and its goals.
http://en.wikipedia.org/wiki/Friendly_ai
The "code string firewall" will protect against all dangerous code, but it does not protect against Human stupidity, which is the most dangerous thing in a global AI network.
No person will be able to hack into your computer through Audivolv. By careful and simple design, Audivolv will not decrease any one computer's security, because all communications through the internet (stateless/connectionless UDP packets) that Audivolv receives are NEVER TRUSTED. Audivolv has no use for encryption or HTTPS. It's not possible to be more secure than not trusting any internet connections.
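The "never trusted" rule could look something like this. All names and the specific validation checks are illustrative assumptions, not Audivolv's real code; the point is only that received bytes must pass validation before anything uses them, and failing packets are silently dropped.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Sketch of receiving UDP packets as untrusted bytes.
public class UntrustedReceiver {

    /** Returns the packet text only if it passes validation, else null. */
    public static String validate(byte[] data, int length) {
        if (length <= 0 || length > 20 * 1024) return null; // size sanity check
        String text = new String(data, 0, length, StandardCharsets.UTF_8);
        // Reject anything with control characters; a real validator would
        // also run the code-string firewall on any code the packet carries.
        for (char c : text.toCharArray()) {
            if (Character.isISOControl(c) && c != '\n') return null;
        }
        return text;
    }

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(0)) { // any free port
            byte[] buffer = new byte[64 * 1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.setSoTimeout(100); // don't block forever in a demo
            try {
                socket.receive(packet); // bytes from the network...
                String safe = validate(packet.getData(), packet.getLength());
                if (safe != null) {
                    System.out.println("accepted: " + safe);
                } // ...are silently dropped if validation fails
            } catch (SocketTimeoutException e) {
                System.out.println("no packet received");
            }
        }
    }
}
```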
Audivolv will obey your privacy options (which default to complete privacy and no internet access). But if you change those options, and you choose to communicate anything on the internet through Audivolv, then Audivolv is designed to try extremely hard to make sure everyone who wants to receive those communications will find them. It's like everybody having the option to run their own "website" through Audivolv, except it's mouse movements, sounds, generated artificial-intelligence code, etc, instead of the HTML you would find on a normal website. It's not a website; that's an analogy. In summary, Audivolv does not spy on you or go past your privacy options ON YOUR COMPUTER, but everything is completely public OUTSIDE YOUR COMPUTER if you choose to communicate things to the rest of the "Audivolv Network". If you turn on the internet option in Audivolv, and you turn on specific options for what things to communicate to the internet, those things become public information.
TECHNICAL DETAILS:
The "Audivolv Network" will be scalable up to unlimited size because it will be organized by self-modifying AI. Low-level parts will be hard-coded for safety, because I cannot allow Audivolv to "hack" or organize "denial of service" attacks or otherwise abuse the internet.
I'll hard-code what is allowed, as simple internet operations, and let Audivolv choose how to use them and what to use them for. That's the only safe way to do it.
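One way to hard-code what is allowed is a closed whitelist of operations that AI code must go through. The names here are assumptions for illustration; the design principle from the text is just that the AI chooses what to send, never how the network is touched.

```java
// Illustrative sketch: the self-modifying AI never touches sockets directly.
// It can only request operations from a small, hard-coded set.
public class AllowedNetOps {

    /** The complete, hard-coded list of network operations AI code may use. */
    public enum Op { SEND_UDP_PACKET, RECEIVE_UDP_PACKET }

    /** AI-generated code must request operations by name through this gate. */
    public static boolean isAllowed(String requested) {
        for (Op op : Op.values()) {
            if (op.name().equals(requested)) return true;
        }
        return false; // anything not on the list is refused
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("SEND_UDP_PACKET")); // true
        System.out.println(isAllowed("OPEN_TCP_SOCKET")); // false
    }
}
```

Because the enum is closed and hard-coded, no amount of self-modification in the AI layer can add a new network capability.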
Everything will be UDP packets. No HTTP. No high-level protocols. Stateless 1-packet UDP sending and receiving. Simple.
I'll need to find the practical maximum size of a UDP packet. Many parts of the internet, and many computers, do not support UDP packets over approximately 30 kilobytes. Some support more, some less. It must be the same limit for everyone; I do not design inconsistent systems.
Until that research is done, make sure no UDP packet created by Audivolv is more than 20 kilobytes, but accept any incoming UDP packet that you know how to accept.
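The interim rule is asymmetric: strict on what we create, lenient on what we accept. A sketch of the outgoing half (the constant and method names are my assumptions):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch of the interim packet-size rule: never CREATE a packet over
// 20 kilobytes. (Accepting any understandable incoming packet is the
// receiver's side of the rule and is not shown here.)
public class PacketSizeRule {

    public static final int MAX_SEND_BYTES = 20 * 1024; // 20 kilobytes

    /** Throws before sending anything over the outgoing limit. */
    public static void send(DatagramSocket socket, byte[] data,
                            InetAddress host, int port) throws Exception {
        if (data.length > MAX_SEND_BYTES) {
            throw new Exception("Packet too big: " + data.length + " bytes");
        }
        socket.send(new DatagramPacket(data, data.length, host, port));
    }

    public static void main(String[] args) throws Exception {
        byte[] tooBig = new byte[MAX_SEND_BYTES + 1];
        try (DatagramSocket socket = new DatagramSocket()) {
            try {
                send(socket, tooBig, InetAddress.getLoopbackAddress(), 9);
            } catch (Exception e) {
                System.out.println("refused: " + e.getMessage());
            }
        }
    }
}
```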
This could be complicated by interactions with other people's modified versions of Audivolv, but that's how it has to be. The AI will learn to work together with other versions without people having to tell it they are different versions. The self-modifying AI code must learn about the differences automatically and adapt.
The "Audivolv Network" will be similar to a "peer to peer" network in some ways, but it will be scalable to unlimited size. It will have no problems scaling up to 6 billion people simultaneously playing mouse-music to flow unconscious Human intelligence through the internet and form a larger mind, a collective-intelligence.
Unlimited scaling because the AI will redesign the "Audivolv Network" if it is not scaling well. No Human interaction with the internet code will ever be required. Complete AI automation.
Many people do not like it when software takes people's jobs. They think "creating jobs is good". Wrong. Creating valuable things is good. Creating extra work is bad. If AI automates everything and takes all the jobs, that does not mean we will have no money. It means there will be more products than we would know what to do with. Money is just a number. Please pay attention to what matters: creating products at exponential speed, and ending big business's control over average people through unfair laws of where the money flows. I'll summarize: Everybody having lots of valuable things is good. Jobs are bad.
Audivolv will always be free and open-source GNU GPL 2+.
Also in the TECHNICAL DETAILS:
All computers in the "Audivolv Network" (which may be running different versions of Audivolv, or versions modified as open-source by anyone) will be interacted with in the same way.
There will not be "clients" and "servers".
There will not be "basic users" and "superpeers".
There will not be any restrictions or central control.
It's all completely distributed and emergently defined. Efficiency has to come through emergent interactions at the small scale.
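The "no clients, no servers, no superpeers" idea means every node runs the exact same code path for every peer. This sketch is entirely my illustration, not Audivolv's design; it only shows that remembering who sent you a packet is the closest thing to "registration" a symmetric network needs.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of a fully symmetric node: no node is special, and every packet
// from every peer goes through one code path.
public class SymmetricPeer {

    /** Every node stores peers the same way; there are no superpeers. */
    private final Set<String> knownPeers = new LinkedHashSet<>();

    /** Any packet from any peer is handled identically. */
    public void onPacket(String fromAddress) {
        // Remembering the sender is the only "registration" that exists;
        // there is no server to log in to and no central list to join.
        knownPeers.add(fromAddress);
    }

    public int peerCount() {
        return knownPeers.size();
    }

    public static void main(String[] args) {
        SymmetricPeer node = new SymmetricPeer();
        node.onPacket("10.0.0.2:4000");
        node.onPacket("10.0.0.3:4000");
        node.onPacket("10.0.0.2:4000"); // duplicates are ignored
        System.out.println(node.peerCount()); // prints 2
    }
}
```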
For all possible choices of which 90% of the computers in the Audivolv network to shut down, a large part of the remaining 10% will continue to play mouse-music and flow generated AI code through the "Audivolv Network".
I will do my best to design it so hackers can not kill the whole network.
Every part of the "Audivolv Network" will have an "off switch", but there is no central "off switch". I will try to design it so the only way to turn off a specific Audivolv on a specific computer is to turn it off from that computer, and Audivolv should try very hard to reconnect to the "Audivolv Network" if its connection is blocked. It is not illegal to reroute communications around something that's blocking them if the sender and receiver want the communication to happen. In other words, I do not participate in censoring, I do not add DRM, and there is no central "off switch" and no central control. No exceptions.
Therefore extra safety is needed in the other areas.
I will make it safe and simple enough that I can explain the safety of my software designs.
I'm not going to put it up quickly and surprise anyone. It's all public information, slowly and carefully thought through in all cases, and it will not run on anyone's computer unless that person chooses to download and run Audivolv.
It is very important that Audivolv does not update its core code. I hard-coded certain parts for a reason. Audivolv gets to add new AI code to itself, but it does not get to download updated versions of its core code or self-modify that. It's for safety. If you want the next version of Audivolv, you have to go download it, and that's the only safe way to do it.