Post by bluatigro on Feb 6, 2020 13:33:55 GMT
error: syntax error [ debugging not working ] [ so I can't find it ]
''bluatigro 6 feb 2020
''https://towardsdatascience.com/simple-neural-network-implementation-in-c-663f51447547?gi=8296e2684a4b
lr = 0.1
numInputs = 2
numHiddenNodes = 2
numOutputs = 1
dim hiddenLayer(numHiddenNodes-1)
dim outputLayer(numOutputs-1)
dim hiddenLayerBias(numHiddenNodes-1)
dim outputLayerBias(numOutputs-1)
dim hiddenWeights(numInputs-1,numHiddenNodes-1)
dim outputWeights(numHiddenNodes-1,numOutputs-1)
for i=0 to numHiddenNodes-1
  for j=0 to numInputs-1
    hiddenWeights(j,i)=rnd(0)
  next j
  hiddenLayerBias(i)=rnd(0)
next i
for i=0 to numOutputs-1
  for j=0 to numHiddenNodes-1
    outputWeights(j,i)=rnd(0)
  next j
  outputLayerBias(i)=rnd(0)
next i
''training set
numTrainingSets = 4
dim training.inputs(numTrainingSets-1,numInputs-1)
for i = 0 to numTrainingSets-1
  for j = 0 to numInputs-1
    read a training.inputs( i , j )
  next j
next i
data 0.0, 0.0
data 1.0, 0.0
data 0.0, 1.0
data 1.0, 1.0
dim training.outputs(numTrainingSets-1,numOutputs-1)
for i = 0 to numTrainingSets-1
  for j = 0 to numOutputs-1
    read a training.outputs(i,j)
  next j
next i
data 0.0
data 1.0
data 1.0
data 0.0
'' Iterate through the entire training for a number of epochs
dim deltaHidden(numHiddenNodes-1)
dim deltaOutput(numOutputs-1)
dim trainingSetOrder(numTrainingSets-1)
for i = 0 to numTrainingSets - 1
  trainingSetOrder( i ) = i
next i
for n=0 to 2e4
  '' As per SGD, shuffle the order of the training set
  for i = 0 to numTrainingSets - 1
    dice = int( rnd(0) * numTrainingSets )
    help = trainingSetOrder( dice )
    trainingSetOrder( dice ) = trainingSetOrder( i )
    trainingSetOrder( i ) = help
  next i
  '' Cycle through each of the training set elements
  for x=0 to numTrainingSets-1
    i = trainingSetOrder(x)
    '' Compute hidden layer activation
    for j=0 to numHiddenNodes-1
      activation = hiddenLayerBias(j)
      for k=0 to numInputs-1
        activation = activation _
          + training.inputs(i,k) * hiddenWeights(k,j)
      next k
      hiddenLayer(j) = signoid(activation)
    next j
    '' Compute output layer activation
    for j=0 to numOutputs-1
      activation = outputLayerBias(j)
      for k=0 to numHiddenNodes-1 '<<<<<<< -k<
        activation = activation _
          + hiddenLayer(k) * outputWeights(k,j)
      next k
      outputLayer(j) = signoid(activation)
    next j
    '' Compute change in output weights
    for j=0 to numOutputs-1
      dError = (training_outputs(i,j) - outputLayer(j))
      deltaOutput(j) = dError * dSignoid(outputLayer(j))
    next j
    '' Compute change in hidden weights
    for j=0 to numHiddenNodes-1
      dError = 0.0
      for k=0 to numOutputs-1
        dError = dError + deltaOutput(k) * outputWeights(j,k)
      next k
      deltaHidden(j) = dError * dSignoid(hiddenLayer(j))
    next j
    '' Apply change in output weights
    for j=0 to numOutputs-1
      outputLayerBias(j) = outputLayerBias(j) _
        + deltaOutput(j) * lr
      for k=0 to numHiddenNodes-1
        outputWeights(k,j) = outputWeights(k,j) _
          + hiddenLayer(k) * deltaOutput(j) * lr
      next k
    next j
    '' Apply change in hidden weights
    for j=0 to numHiddenNodes-1
      hiddenLayerBias(j) = hiddenLayerBias(j) _
        + deltaHidden(j) * lr
      for k=0 to numInputs-1
        hiddenWeights(k,j) = hiddenWeights(k,j) _
          + training.inputs(i,k) * deltaHidden(j) * lr
      next k
    next j
  next x '<<<<<<<
  fout = 0.0
  for i = 0 to numOutputs-1
    fout += abs( deltaOutput( i ) )
  next i
  if n mod 1000 = 0 then
    print n , fout
  end if
next n
print "[ end game ]"
end
function signoid( x as double ) as double
  return 1 / ( 1 + exp( -x ) )
end function
function dsignoid( x as double ) as double
  return x * ( 1 - x )
end function
Post by ntech on Feb 6, 2020 16:02:26 GMT
Perhaps it would be easier to have one string array for each layer, holding each neuron's weights and bias in a single string delimited by some character, and have a function to extract a field when wanted.
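A minimal sketch of that idea ( the values here are made up for illustration; LB's word$( ) does the field extraction):

' one string per hidden neuron: weight1 weight2 bias, space-delimited
dim hiddenNeuron$( 1)
hiddenNeuron$( 0) = "0.31 0.77 0.12"
hiddenNeuron$( 1) = "0.54 0.09 0.83"

print nnField( hiddenNeuron$( 0), 2) ' prints the second weight, 0.77

' extract the n-th numeric field from a delimited neuron string
function nnField( s$, n)
  nnField = val( word$( s$, n, " "))
end function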
I think I've found one of the syntax errors (it occurs a couple times):
''training set
numTrainingSets = 4
dim training.inputs(numTrainingSets-1,numInputs-1)
for i = 0 to numTrainingSets-1
  for j = 0 to numInputs-1
    read a training.inputs( i , j ) '<---- HERE IS THE ERROR
  next j
next i
Post by tenochtitlanuk on Feb 6, 2020 17:32:37 GMT
Several other errors:
- The read-and-assign was wrong in several places.
- LB has no unary minus, so e.g. exp( -x ) has to be written exp( 0 -x ).
- The function definitions were not changed to LB syntax.
- I changed the variable names that use '.'; there was one using '_' too.
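In LB terms the fixes look like this ( a quick reference sketch, not a complete program):

' read-and-assign needs an explicit assignment:
read a
trainingInputs( i, j) = a

' no unary minus inside an expression, so negate by subtracting from zero:
y = exp( 0 -x)

' functions return by assigning to the function name- no 'as double' or 'return':
function f( x)
  f = 1 /( 1 +exp( 0 -x))
end function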
My version below runs, but I don't know what the output means. I'm afraid I entered a lot of spacing to make it readable for me- but if you search for lines with <<<< you'll find what I changed. I think for someone with sight difficulties to tackle things like this is fantastic. Keep us in the loop as you polish this!
johnf
''bluatigro 6 feb 2020
''https://towardsdatascience.com/simple-neural-network-implementation-in-c-663f51447547?gi=8296e2684a4b
lr = 0.1
numInputs = 2
numHiddenNodes = 2
numOutputs = 1

dim hiddenLayer( numHiddenNodes -1)
dim outputLayer( numOutputs -1)
dim hiddenLayerBias( numHiddenNodes -1)
dim outputLayerBias( numOutputs -1)
dim hiddenWeights( numInputs -1, numHiddenNodes -1)
dim outputWeights( numHiddenNodes -1, numOutputs -1)

for i =0 to numHiddenNodes -1
  for j =0 to numInputs -1
    hiddenWeights( j, i) =rnd( 0)
  next j
  hiddenLayerBias( i) =rnd( 0)
next i

for i =0 to numOutputs -1
  for j =0 to numHiddenNodes -1
    outputWeights( j, i) =rnd( 0)
  next j
  outputLayerBias( i) =rnd( 0)
next i

''training set
numTrainingSets = 4
dim trainingInputs( numTrainingSets -1, numInputs -1)
for i = 0 to numTrainingSets -1
  for j = 0 to numInputs -1
    read a
    trainingInputs( i , j ) =a ' <<<< missing =a
  next j
next i
data 0.0, 0.0
data 1.0, 0.0
data 0.0, 1.0
data 1.0, 1.0

dim trainingOutputs( numTrainingSets -1, numOutputs -1)
for i = 0 to numTrainingSets -1
  for j = 0 to numOutputs -1
    read a
    trainingOutputs( i, j) =a ' missing '=a'
  next j
next i
data 0.0
data 1.0
data 1.0
data 0.0

'' Iterate through the entire training for a number of epochs
dim deltaHidden( numHiddenNodes -1)
dim deltaOutput( numOutputs -1)
dim trainingSetOrder( numTrainingSets -1)
for i = 0 to numTrainingSets - 1
  trainingSetOrder( i ) = i
next i

for n =0 to 2e4
  '' As per SGD, shuffle the order of the training set
  for i = 0 to numTrainingSets - 1
    dice = int( rnd( 0) * numTrainingSets )
    help = trainingSetOrder( dice )
    trainingSetOrder( dice ) = trainingSetOrder( i )
    trainingSetOrder( i ) = help
  next i

  '' Cycle through each of the training set elements
  for x =0 to numTrainingSets -1
    i = trainingSetOrder( x)

    '' Compute hidden layer activation
    for j =0 to numHiddenNodes -1
      activation = hiddenLayerBias( j)
      for k =0 to numInputs -1
        activation = activation _
          + trainingInputs( i, k) * hiddenWeights( k,j )
      next k
      hiddenLayer( j) = signoid( activation)
    next j

    '' Compute output layer activation
    for j =0 to numOutputs -1
      activation = outputLayerBias( j)
      for k =0 to numHiddenNodes -1 '<<<<<<< -k<
        activation = activation _
          + hiddenLayer( k) * outputWeights( k, j)
      next k
      outputLayer( j) = signoid( activation)
    next j

    '' Compute change in output weights
    for j =0 to numOutputs-1
      dError = ( trainingOutputs( i, j) - outputLayer( j)) ' ?? training_outputs ' <<<<<<<<<
      deltaOutput( j) = dError * dSignoid( outputLayer( j))
    next j

    '' Compute change in hidden weights
    for j =0 to numHiddenNodes -1
      dError = 0.0
      for k =0 to numOutputs -1
        dError = dError + deltaOutput( k) * outputWeights( j, k)
      next k
      deltaHidden( j) = dError * dSignoid( hiddenLayer( j))
    next j

    '' Apply change in output weights
    for j =0 to numOutputs -1
      outputLayerBias( j) = outputLayerBias( j) _
        + deltaOutput( j) * lr
      for k =0 to numHiddenNodes -1
        outputWeights( k, j) = outputWeights( k, j) _
          + hiddenLayer( k) * deltaOutput( j) * lr
      next k
    next j

    '' Apply change in hidden weights
    for j =0 to numHiddenNodes-1
      hiddenLayerBias( j) = hiddenLayerBias( j) _
        + deltaHidden( j) * lr
      for k =0 to numInputs- 1
        hiddenWeights( k, j) = hiddenWeights( k, j) _
          + trainingInputs( i, k) * deltaHidden( j) * lr
      next k
    next j
  next x '<<<<<<<

  fout = 0.0
  for i = 0 to numOutputs -1
    fout = fout +abs( deltaOutput( i ) ) ' <<<<<<< no += yet in LB
  next i
  if n mod 1000 = 0 then
    print n , fout
  end if
next n

print "[ end game ]"
end

function signoid( x) ' <<<<<<<
  signoid = 1 / ( 1 + exp( 0 -x ) ) ' LB's lack of unitary minus
end function

function dsignoid( x) ' <<<<<<<
  dsignoid = x * ( 1 - x )
end function
PS did you ever get the quasicrystal one to work??
Post by bluatigro on Feb 7, 2020 12:38:01 GMT
The number it prints is the overall error the net makes.
The quasicrystal one I made some time ago in QBasic- I can not find the code anymore.
I am grateful for the help I am getting with this. I have not yet found an NN example with more than one hidden layer- if we had that, we could do deep learning.
I still do not grasp this NN business completely.
Post by tenochtitlanuk on Feb 7, 2020 20:41:55 GMT
I'm concerned about the arrays.
Since numOutputs = 1, dim outputLayer( numOutputs -1) is trying to create a 1D array with zero terms. I'd have expected an error message.
EDIT
numOutputs = 1
dim outputLayer( numOutputs -1) ' is trying to create a 1D array with zero terms.
outputLayer( 0) =666
print outputLayer( 0)
- actually works- a zero-dimension array behaves like a single variable!
Post by tenochtitlanuk on Feb 9, 2020 12:38:03 GMT
I missed the important error- you have a mixture of 'signoid' and 'Signoid'- two capital/lower-case errors to change. ( It should actually be 'sigmoid', with an 'm' not an 'n', as well.) I only noticed after checking against the C code!
It seems to run now. Haven't tried it with different numbers of inputs or nodes or outputs.
Like you I don't yet really 'get' the back propagation.
Post by Rod on Feb 9, 2020 16:16:46 GMT
Back propagation: the act of valuing the inputs automatically and adjusting the worth of each. Building a decision involves marshalling all available data and then finding out what info is statistically significant in known outcomes. Very simple example: people who have recently moved address are statistically a higher risk. It's a fact. But how do you value that single fact amongst all the other info? It gets a weighting. When a range of data gets individual weightings, we then find that similar high weightings of other data reinforce the risk assessment, and the combination needs more weighting. So not only individual inputs but also certain combinations of inputs significantly indicate risk.
This hugely complex task is all condensed in a self-learning neural network. But you knew all that!
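In the LB code above, that weighting adjustment is the classic delta rule. Written out for one output neuron ( my notation, mapped onto the program's variables):

$\delta_j = (t_j - o_j)\, o_j (1 - o_j)$ - this is deltaOutput( j), from target $t_j$ and output $o_j$
$w_{kj} \leftarrow w_{kj} + \eta\, h_k\, \delta_j$ - the outputWeights( k, j) update, with $h_k$ = hiddenLayer( k) and $\eta$ = lr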
Post by tenochtitlanuk on Feb 9, 2020 20:10:13 GMT
Seems to be working- it generally converges to a small error when allowed to sample a table of XOR values. I totally understand the forward calculation, but not why the initial random biases/weights are drawn between 0 and 1, yet can end up at any value, positive or negative, outside that range. BUT the error flips backwards and forwards over four paths first ( see diagram). Why? Is the shuffle faulty? EDIT no- it's an artefact of the four data paths to the hidden layer. And I am still puzzling over the coding of the backward propagation that improves the biases/weights.
Post by tenochtitlanuk on Feb 10, 2020 12:28:41 GMT
Seems to converge correctly to a small error for all digital in/out gates- ie AND, OR, XOR, NAND etc- and also for analog inputs and outputs.
Will do a page on my website soon- bluatigro did the hard work of locating a well-annotated tutorial we could follow in C and Python and translate to LB. Thanks!
Post by tenochtitlanuk on Feb 11, 2020 11:42:47 GMT
Here's a cut-down version showing USING the learned weights and biases ( an unlikely-looking set of numbers output by the previous program) to GENERATE outputs. That's the whole point of machine 'learning'- to predict with acceptable accuracy.
Have tested on AND and XOR..
eg for XOR it should give HI only if one input or the other is high, but not both nor neither- and it does, with small error.
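For reference, the truth table the net is approximating:

in1 in2 | XOR
  0   0 |  0
  0   1 |  1
  1   0 |  1
  1   1 |  0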
0 0 => 0.069 low
0 1 => 0.994 high
0 1 => 0.994 high
1 1 => 0.070 low
1 1 => 0.070 low
1 1 => 0.070 low
0 1 => 0.994 high
0 0 => 0.069 low
0 1 => 0.994 high
0 0 => 0.069 low
1 0 => 0.986 high
0 0 => 0.069 low
1 1 => 0.070 low
1 1 => 0.070 low
0 0 => 0.069 low
0 1 => 0.994 high
0 0 => 0.069 low
0 0 => 0.069 low
1 1 => 0.070 low
1 0 => 0.986 high
1 1 => 0.070 low
0 1 => 0.994 high
1 0 => 0.986 high
0 0 => 0.069 low
1 1 => 0.070 low
0 0 => 0.069 low
0 0 => 0.069 low
1 0 => 0.986 high
0 1 => 0.994 high
1 0 => 0.986 high
0 0 => 0.069 low
0 0 => 0.069 low
1 0 => 0.986 high
0 0 => 0.069 low
1 0 => 0.986 high
0 0 => 0.069 low
0 1 => 0.994 high
1 0 => 0.986 high
0 1 => 0.994 high
0 0 => 0.069 low
1 0 => 0.986 high
1 0 => 0.986 high
0 1 => 0.994 high
1 1 => 0.070 low
1 0 => 0.986 high
0 1 => 0.994 high
0 1 => 0.994 high
0 0 => 0.069 low
0 0 => 0.069 low
1 0 => 0.986 high
'Saved output from training on digital XOR gate data
' Hidden layer- ____weights______ and bias.
'   -5.604    5.330    -2.963
'    6.038   -6.065    -3.414
'Output layer- ____weights______ and bias.
'    8.805    8.656    -4.311
'Compute hidden layer activation =bias +sumOf( inputs *weights)

numInputs = 2
numHiddenNodes = 2
numOutputs = 1

dim trainingInputs( 0, 1)
trainingInputs( 0, 0) =0
trainingInputs( 0, 1) =0

dim hiddenLayer( 1)

dim hiddenLayerBias( 1)
hiddenLayerBias( 0) =0 -2.963
hiddenLayerBias( 1) =0 -3.414

dim outputLayerBias( 0)
outputLayerBias( 0) =0 -3.311

dim hiddenWeights( 1, 1)
hiddenWeights( 0, 0) =0 -5.604
hiddenWeights( 0, 1) = 5.330
hiddenWeights( 1, 0) = 6.038
hiddenWeights( 1, 1) =0 -6.605

dim outputWeights( 1, 0)
outputWeights( 0, 0) =8.805
outputWeights( 1, 0) =8.656

dim outputLayer( 0)

for test =1 to 50 ' check generates near-correct values for AND gate
  trainingInputs( 0, 0) =int( 2 *rnd( 1))
  trainingInputs( 0, 1) =int( 2 *rnd( 1))

  '' Compute hidden layer activation
  for j =0 to 1
    activation = hiddenLayerBias( j)
    for k =0 to 1
      activation =activation +trainingInputs( 0, k) *hiddenWeights( k, j ) ' row 0 holds the current test pair
    next k
    ' calc resulting neuron output
    hiddenLayer( j) = sigmoid( activation)
  next j

  '' Compute output layer activation
  for j =0 to 0
    activation = outputLayerBias( j)
    for k =0 to 1
      activation = activation + hiddenLayer( k) * outputWeights( k, j)
    next k
    ' calc resulting neuron output
    outputLayer( j) = sigmoid( activation)
  next j

  print trainingInputs( 0, 0); " "; trainingInputs( 0, 1); " => "; using( "##.###", outputLayer( 0)); " ";
  if outputLayer( 0) <0.1 then print "low"
  if outputLayer( 0) >0.9 then print "high"
  if outputLayer( 0) >=0.1 and outputLayer( 0) <=0.9 then print "WHOOPS!"
next test
wait

function sigmoid( x)
  sigmoid = 1 /( 1 +exp( 0 -x ) ) ' LB's lack of unitary minus
end function

function dsigmoid( x)
  dsigmoid = x *( 1 -x )
end function
Post by meerkat on Feb 11, 2020 12:37:20 GMT
I'm trying to understand neural nets. Shouldn't the results get better each time? Why do the numbers 0.994, 0.070, 0.986, and 0.069 keep showing up? Is that an indication that it's the best it can do? Just wondering. Thanks for the help..
Post by tenochtitlanuk on Feb 11, 2020 16:09:58 GMT
I first learned about perceptrons in the Sixties ( weighted inputs, and a bang/bang threshold- in those days with motor-driven potentiometers representing analog values of weights). They are NOT universal von Neumann machines.
I've also played in LB with Fuzzy Logic. Unfortunately my maths only got to First Year in Cambridge in the Sixties! ( too hard, and I was far more interested in semiconductor physics and materials science)
Neural networks use 'sigmoid neurons' with a softer threshold- the 'sigmoid' function in my code. This makes them able to compute the XOR function that perceptrons can't. Generally, a larger quantity of training data will make your neural network better 'understand' your data distribution, and so produce better, more reliable output. You can also alter the 'learning rate'.
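Written out, the sigmoid and the derivative identity the code leans on:

$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,(1 - \sigma(x))$

That identity is why dsigmoid( x) is simply x *( 1 -x ): by the time it is called, x already holds the neuron's output $\sigma(activation)$ rather than the raw activation.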
There are generally two phases- represented by my version of bluatigro's code ( learning on already-collected sample data of inputs and corresponding outputs) and by my latest code ( using the learned parameters to give satisfactory output from new data cases). You can also have 'supervised learning', where as each new data set becomes available YOU supply the output it is to be classified by.
As far as I know there is no certainty that the calculated internal parameters of weights and biases can ever give perfect discrimination on future data, nor whether there is more than one set of values that will 'work'. You can keep converging on values, but may never 'arrive' at any particular level of accuracy. The time taken to converge on a good set of parameters varies, and some in/out relationships take longer than others. I think I may have seen cases where the initial random weights/biases used in the learning stage did not converge to a minimum error ( my graphical output levelling off at a high value, not near zero)- but that may have been simply when my coding still contained bugs.
It is a whole vast area to get interested in. Practical applications have many-dimensioned data and multiple output values. LB is brilliant because it allows us to code at the level of simple arithmetic, performed many times, but it becomes too slow on these vast datasets. Higher-level languages make it EASIER to code ( because you can represent arrays as vectors and do dot products in single-line instructions) but rather harder to see what is basically happening.
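For instance, the whole hidden-layer loop above collapses, in vector notation, to one line:

$\mathbf{h} = \sigma( W^{\top} \mathbf{x} + \mathbf{b} )$

with $W$ the hiddenWeights matrix, $\mathbf{x}$ the inputs and $\mathbf{b}$ the biases- one matrix-vector product in place of two nested for loops.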
The internet, Wikipedia and other on-line resources make it so much easier to learn such things. My aging brain and eyesight have the opposite effect!
Post by meerkat on Feb 11, 2020 20:07:30 GMT
I hear you.. I have the same problem with age. But I try.. Anyway, here is another neural net I converted from Java. The original code is here in several different languages: www.philbrierley.com/code
I post it as is. I ran it and it runs. That's all I can say. Just thought it might be interesting.

' -----------------------------------------------------
' MLP neural network in Java
' Original source code by Phil Brierley
' http://www.philbrierley.com/code.html
'
' This code may be freely used and modified at will
'
' Tanh hidden neurons
' Linear output neuron
'
' To include an input bias create an
' extra input in the training data
' and set to 1
'
' compiled and tested on
' Symantec Cafe Lite
' -----------------------------------------------------
' --- user defineable variables
numEpochs = 500    ' number of training cycles
numInputs = 3      ' number of inputs - this includes the input bias
numHidden = 4      ' number of hidden units
numPatterns = 4    ' number of training patterns
LR.IH = 0.7        ' learning rate
LR.HO = 0.07       ' learning rate

' --- process variables
' patNum
' errThisPat
' outPred
' RMSerror

' --- training data
dim trainInputs(numPatterns,numInputs)
dim trainOutput(numPatterns)

' --- the outputs of the hidden neurons
dim hiddenVal(numHidden)

' --- the weights
dim weightsIH(numInputs,numHidden)
dim weightsHO(numHidden)

'==============================================================
' MAIN PROGRAM
'==============================================================

' --- initiate the weights
for j = 0 to numHidden
  weightsHO(j) = (rnd(0) - 0.5)/2
  for i = 0 to numInputs
    weightsIH(i,j) = (rnd(0) - 0.5)/5
  next i
next j

' --- load in the data
print "initialising data"

' --- the data here is the XOR data
' --- it has been rescaled to the range (-1,1)
' --- an extra input valued 1 is also added
' --- to act as the bias

trainInputs(0,0) = 1
trainInputs(0,1) = -1
trainInputs(0,2) = 1 ' bias
trainOutput(0) = 1

trainInputs(1,0) = -1
trainInputs(1,1) = 1
trainInputs(1,2) = 1 ' bias
trainOutput(1) = 1

trainInputs(2,0) = 1
trainInputs(2,1) = 1
trainInputs(2,2) = 1 ' bias
trainOutput(2) = -1

trainInputs(3,0) = -1
trainInputs(3,1) = -1
trainInputs(3,2) = 1 ' bias
trainOutput(3) = -1

' --- train the network
for j = 0 to numEpochs
  for i = 0 to numPatterns
    ' --- select a pattern at random
    patNum = rnd(0) * numPatterns -0.001

    ' --- calculate the current network output
    ' --- and error for this pattern
    gosub [calcNet]

    ' --- change network weights
    ' --- adjust the weights hidden-output
    for kk = 0 to numHidden
      weightChange = LR.HO * errThisPat * hiddenVal(kk)
      weightsHO(kk) = weightsHO(kk) - weightChange
      ' --- regularisation on the output weights between -5 and 5
      if (weightsHO(kk) <-5) then weightsHO(kk) = -5
      if (weightsHO(kk) > 5) then weightsHO(kk) = 5
    next kk

    ' --- adjust the weights input-hidden
    for ii = 0 to numHidden
      for kk = 0 to numInputs
        x = 1 - (hiddenVal(ii) * hiddenVal(ii))
        x = x * weightsHO(ii) * errThisPat * LR.IH
        x = x * trainInputs(patNum,kk)
        weightChange = x
        weightsIH(kk,ii) = weightsIH(kk,ii) - weightChange
      next kk
    next ii
  next i

  ' --- display the overall network error
  ' --- after each epoch
  ' --- calc Overall Error
  RMSerror = 0.0
  for ii = 0 to numPatterns
    patNum = ii
    gosub [calcNet]
    RMSerror = RMSerror + (errThisPat * errThisPat)
  next ii
  RMSerror = RMSerror/numPatterns
  RMSerror = sqrt(RMSerror)
  print "epoch = ";j;" RMS Error = ";RMSerror
next j

' ----------------------------------
' --- training has finished
' --- display the results
for ii = 0 to numPatterns
  patNum = ii
  gosub [calcNet]
  print "pat = ";patNum+1;" actual = ";trainOutput(patNum);" neural model = ";outPred
next ii
end
'============================================================
'= END =
'============================================================

'************************************
function tanh(x)
  if x > 20 then
    tanh = 1
    goto [extTanh]
  end if
  if (x < -20) then
    tanh = -1
    goto [extTanh]
  end if
  a = exp(x)
  b = exp(0-x)
  tanh = (a-b)/(a+b)
[extTanh]
end function

'************************************
[calcNet]
' --- calculate the outputs of the hidden neurons
' --- the hidden neurons are tanh
for iii = 0 to numHidden
  hiddenVal(iii) = 0.0
  for jjj = 0 to numInputs
    hiddenVal(iii) = hiddenVal(iii) + (trainInputs(patNum,jjj) * weightsIH(jjj,iii))
  next jjj
  hiddenVal(iii) = tanh(hiddenVal(iii))
next iii

' --- calculate the output of the network
' --- the output neuron is linear
outPred = 0.0
for iii = 0 to numHidden
  outPred = outPred + hiddenVal(iii) * weightsHO(iii)
next iii

' --- calculate the error
errThisPat = outPred - trainOutput(patNum)
RETURN
Post by tenochtitlanuk on Feb 11, 2020 20:31:54 GMT
Hi kokenge/meerkat
Yup, very similar algorithm.. and his 'learn about them by coding them in a language you know' approach is what I do.
My first language ( for computers) was Titan Autocode, precursor of Fortran, on Cambridge's Atlas, about 1967. I programmed in BASIC for years on various home computers.
When I can't find code in Liberty BASIC I look next for code in other BASICs ( or a well-explained pseudo-code of the algorithm). Failing that I look for versions in Fortran or Python and perhaps C- and translate. I like Python, but its extensive libraries behave like black boxes- and I want to understand what's going on as they work their magic.
The Rosetta Code site is therefore great, since so many algorithms are represented there.
I wonder what languages I'd be learning now if I was younger? Mathematica? Rust? TensorFlow? Getting into very specalised but powerful software. Meanwhile I still work on human languages....
JohnF
EDIT PS the forum did not like words that include 'c i a l i s' in their correct spelling- see above two lines... It said 'Stop forum abuse!'. Not that I'm purchasing...