07-16-2022, 03:17 PM
Yes, the neuron structure is very simple - just data in, data worked on, data out. The part that's tricky is the weights and bias the neuron works with. A weight on a piece of data very much depends on the accuracy of the database the neuron is getting its data from. A weight can be computed simply: sum up the values of every item in the database, and then the weight of one item is just (the value of that item) / (the total sum of the values of all items). But here's the rub. Let's say your database is parts of the body, or trees in a forest, or any group of items you want to group together. Depending on the result you are looking for, do you give each item an equal weight, or do some of the items play a greater role in the outcome and therefore deserve a greater weight? If you don't get the outcome you are looking for, you can tweak the weights and the bias value and start all over again.
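Here's a minimal sketch of that idea in Python - the function names and the sample values are just made up for illustration. Each item's weight is its value divided by the total, and a single "neuron" combines weighted inputs with a bias:

```python
def normalized_weights(values):
    """Weight of one item = its value / total sum of all item values."""
    total = sum(values)
    return [v / total for v in values]

def neuron(inputs, weights, bias):
    """Data in -> weighted sum plus bias -> data out."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

values = [2.0, 3.0, 5.0]              # pretend values pulled from the database
weights = normalized_weights(values)  # [0.2, 0.3, 0.5] - they sum to 1
out = neuron([1.0, 1.0, 1.0], weights, bias=0.0)
```

If the output isn't what you expected, you nudge the weights or the bias and run it again - that tweaking loop is the whole game.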
If you get a neuron with an outcome really close to what you expect, then you build other neurons doing different but complementary tasks. Group the working neurons into a Perceptron and now you're on your way to predictions, or deeper clarity on how the stuff in the database actually works together.
There is tons of stuff on the internet about Perceptrons, the sigmoid function, machine learning, decision theory and game theory, regression analysis, the Bayesian likelihood function, decision boundaries, recurrent neural networks - just to name a few topics. You could (and I do) spend days just reading this stuff and trying to apply what you understand. It is a lot of fun; you get lost in it. It's one of those hobbies that justifies a desktop or laptop vs. an iPad.