Library "WIPNNetwork" this is a work in progress (WIP) and prone to have some errors, so use at your own risk... let me know if you find any issues.. Method for a generalized Neural Network. network(x) Generalized Neural Network Method. Parameters: x : TODO: add parameter x description here Returns: TODO: add what function returns
The logic is correct, but I prefer to call it experimental because the sample set is narrow (300 columns). Let's start. 6 inputs: Volume change, Bollinger lower band change, Bollinger middle band change, Bollinger upper band change, RSI change, MACD histogram change. 1 output: future bar change (historical). Training timeframe: 15 min (Analysis TF > 4 hours (My...
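For orientation only, here is a minimal Pine Script v5 sketch of how those 6 inputs could be computed on each bar. The indicator lengths (20/2 Bollinger Bands, 14-period RSI, 12/26/9 MACD) and the way the training target is paired with lagged inputs are assumptions for illustration, not settings taken from the published script.

//@version=5
indicator("NN input features (sketch)")

// Assumed lengths; the original script's settings are not stated.
[bbMid, bbUp, bbLow] = ta.bb(close, 20, 2.0)
[macdLine, signalLine, macdHist] = ta.macd(close, 12, 26, 9)
rsiVal = ta.rsi(close, 14)

// Bar-to-bar changes used as the six network inputs.
volChg      = ta.change(volume)
bbLowChg    = ta.change(bbLow)
bbMidChg    = ta.change(bbMid)
bbUpChg     = ta.change(bbUp)
rsiChg      = ta.change(rsiVal)
macdHistChg = ta.change(macdHist)

// Historical training target: the change of the bar that follows the inputs.
// On historical data this is obtained by pairing the current bar's change
// with the previous bar's inputs (e.g. volChg[1], rsiChg[1], ...).
float target = ta.change(close)

plot(target, "training target (bar change)")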
Library "FunctionNNLayer" Generalized Neural Network Layer method. function(inputs, weights, n_nodes, activation_function, bias, alpha, scale) Generalized Layer. Parameters: inputs : float array, input values. weights : float array, weight values. n_nodes : int, number of nodes in layer. activation_function : string, default='sigmoid',...
Experimental NAND Perceptron based upon a Python template that aims to predict NAND gate outputs. A perceptron is one of the foundational building blocks of nearly all advanced neural network layers and models for algo trading and machine learning. The goal behind this script was threefold: to prove and demonstrate that an ACTUAL working neural net can be...
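For intuition, here is a self-contained Pine v5 sketch of the underlying idea rather than the script itself: a single step-activation perceptron trained on the four NAND truth-table rows with the classic update w := w + lr * (target - prediction) * input, run once per bar.

//@version=5
indicator("NAND perceptron sketch")

// Step activation: fire if the weighted sum is non-negative.
step(float x) => x >= 0.0 ? 1.0 : 0.0

// NAND truth table: inputs (a, b) -> target t.
var float[] a = array.from(0.0, 0.0, 1.0, 1.0)
var float[] b = array.from(0.0, 1.0, 0.0, 1.0)
var float[] t = array.from(1.0, 1.0, 1.0, 0.0)

// Weights and bias persist across bars; one pass over the table per bar.
var float w1 = 0.0
var float w2 = 0.0
var float bias = 0.0
float lr = 0.1

for i = 0 to 3
    float pred = step(w1 * array.get(a, i) + w2 * array.get(b, i) + bias)
    float err = array.get(t, i) - pred
    w1 += lr * err * array.get(a, i)
    w2 += lr * err * array.get(b, i)
    bias += lr * err

// NAND is linearly separable, so the perceptron convergence theorem
// guarantees the weights eventually reproduce it, e.g. for input (1, 1):
plot(step(w1 + w2 + bias), "NAND(1,1)")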
Library "FunctionNNPerceptron" Perceptron Function for Neural networks. function(inputs, weights, bias, activation_function, alpha, scale) generalized perceptron node for Neural Networks. Parameters: inputs : float array, the inputs of the perceptron. weights : float array, the weights for inputs. bias : float, default=1.0, the default bias...
Library "MLActivationFunctions" Activation functions for Neural networks. binary_step(value) Basic threshold output classifier to activate/deactivate neuron. Parameters: value : float, value to process. Returns: float linear(value) Input is the same as output. Parameters: value : float, value to process. Returns: float sigmoid(value) ...
Library "MLLossFunctions" Methods for Loss functions. mse(expects, predicts) Mean Squared Error (MSE) " MSE = 1/N * sum ((y - y')^2) ". Parameters: expects : float array, expected values. predicts : float array, prediction values. Returns: float binary_cross_entropy(expects, predicts) Binary Cross-Entropy Loss (log). Parameters: ...