

My goals for the new system:

- Weaken and grow first, before beginning to hack.
- Allocate resources toward the most efficient available task, subject to some allowances for early progression.
- Minimize RAM usage (scheduling overhead of around 30GB).

I designed a system with three main components:

- A spider that crawls the network and hacks every node it can.
- A distributor to coordinate work among the available owned servers.
- Worker scripts that do the actual weakening, growing, and hacking.

The spider is very straightforward, as you will see below in spider2.js. It uses a breadth-first search across the nodes starting from home, hacking any nodes we have the capability to. It stores the hacked node list in a newline-separated file, so that other scripts don't have to invoke a function or spend precious CPU time reconstructing the list.

The distributor is the most interesting part. In a loop, it:

- cancels all existing distributor-controlled workers,
- schedules a new set of workers according to the priorities below, and
- awaits a signal that something material has changed.

We cancel all existing workers because it is easier to solve this problem if you don't have to keep track of state. Netscript's programming capabilities are some of the most challenging and inconsistent I've ever worked with, so I want to write as little complex code as possible. Cancelling all our existing workers has some minor drawbacks in terms of performance, but what it wins us in simplicity dominates such considerations. We'll be able to spend more time thinking about algorithmic improvements if we don't have to do fiddly things like managing state.

The new worker scheduling algorithm currently has two basic priorities. The first priority is to focus on weakening the weakest pending node. The distributor iterates through the targets in the order the spider observed them (i.e. breadth-first order from home). If it encounters any that are significantly more secure than their minimum security level, it dedicates as many threads as possible among all the hosts to weakening that server. It also spawns a small watcher script to notify the distributor when a node like this has been weakened down to the minimum level. Currently, I only do this preparation step for security level, but I should probably also grow servers before beginning to hack them.

The second priority is to schedule "flexihack" workers. We try to schedule these backwards in the targets list, focusing on the highest-growth servers first.
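The spider's breadth-first crawl can be sketched roughly like this. The hard-coded `network` map and `canHack` predicate are made-up stand-ins for Netscript's `ns.scan()` and the game's port/hacking-level checks; the real spider2.js queries the game instead:

```javascript
// Stand-in for the in-game network graph (ns.scan would discover this).
const network = {
  home: ["n00dles", "foodnstuff"],
  n00dles: ["home", "max-hardware"],
  foodnstuff: ["home"],
  "max-hardware": ["n00dles"],
};
// Stand-in for the "do we have the capability to hack this?" check.
const canHack = (node) => node !== "max-hardware";

function spider(start) {
  const seen = new Set([start]);
  const queue = [start];
  const hacked = [];
  while (queue.length > 0) {
    const node = queue.shift(); // FIFO queue => breadth-first order
    if (node !== start && canHack(node)) hacked.push(node);
    for (const next of network[node]) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return hacked;
}

// Persist the hacked list newline-separated, so other scripts can read
// the file cheaply instead of re-crawling the network.
const fileContents = spider("home").join("\n");
console.log(fileContents);
```

Because the queue is FIFO, the file ends up listing nodes in the same breadth-first order the distributor later iterates them in.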

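The first scheduling priority could look something like the sketch below. The security numbers and thread pool size are invented for illustration (in-game they come from calls like `ns.getServerSecurityLevel()`), and it assumes Bitburner's usual 0.05 security reduction per weaken thread:

```javascript
// Targets in the order the spider observed them (breadth-first from home).
const targets = [
  { name: "n00dles", security: 1.0, minSecurity: 1.0 },
  { name: "foodnstuff", security: 10.0, minSecurity: 3.0 },
  { name: "sigma-cosmetics", security: 8.0, minSecurity: 2.5 },
];
const SLACK = 0.5; // "significantly more secure" threshold above the minimum

function planWeaken(targets, availableThreads) {
  const plan = [];
  let remaining = availableThreads;
  for (const t of targets) { // spider order: earliest-discovered first
    if (remaining === 0) break;
    if (t.security > t.minSecurity + SLACK) {
      // Each weaken thread lowers security by 0.05 (the game's base value).
      const needed = Math.ceil((t.security - t.minSecurity) / 0.05);
      const threads = Math.min(needed, remaining);
      remaining -= threads;
      plan.push({ target: t.name, threads });
    }
  }
  return plan;
}

console.log(planWeaken(targets, 200));
```

With 200 threads available, foodnstuff (first over-secured target in spider order) gets its full 140 threads and sigma-cosmetics absorbs the remaining 60; n00dles is already at its minimum and is skipped. In the real distributor, each planned entry would also spawn the small watcher script that signals when the target reaches its minimum security.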

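The second priority — walking the target list backwards to hand leftover threads to flexihack workers — might be sketched like this. The target names, thread counts, and per-worker cap are invented for illustration:

```javascript
// Fill remaining capacity with "flexihack" workers, iterating the spider's
// target list backwards so the later-discovered (higher-growth) servers
// are scheduled first.
function planFlexihack(targets, availableThreads, perWorker) {
  const plan = [];
  let remaining = availableThreads;
  for (let i = targets.length - 1; i >= 0 && remaining > 0; i--) {
    const threads = Math.min(perWorker, remaining);
    remaining -= threads;
    plan.push({ target: targets[i], threads });
  }
  return plan;
}

console.log(planFlexihack(["n00dles", "foodnstuff", "phantasy"], 25, 10));
// phantasy gets 10 threads, foodnstuff 10, and n00dles the remaining 5
```

The reverse walk is just a cheap proxy for "highest growth first", relying on the spider tending to reach the bigger servers later; sorting explicitly by each server's growth value would be the more direct approach.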