>> The potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.
Yes! I'm thinking of how to adapt Louise (https://github.com/stassa/louise) to do that. The fact that s(CASP) is basically a Prolog-y version of ASP (with constraints) could make it a very natural sort of modification. Or, of course, there's always Well-Founded Semantics (https://www.swi-prolog.org/pldoc/man?section=WFS).
There was an earlier system called Thelma (https://github.com/stassa/thelma), an acronym for "Theory Learning Machine". Then I created a new system and, well, I couldn't resist a bad pun. They're a bit of a tradition in ILP.
One difference is that in machine learning you must think about data structures and algorithms, i.e. the practical ways to compute a model: how to represent and transform data while building it. I think this is given less emphasis in statistics, where standard models are often used and theory is built around those models, for example power calculations for a regression model.
One of the interesting things about this take on Prolog, versus The Power of Prolog (https://www.metalevel.at/prolog), is that it attempts non-monotonic reasoning. There is still a lot of value in the ideas of abductive and inductive logic programming that have not been fully exploited in the current machine learning trends.
The thing is, findall/3 is not really in the standard pure part of Prolog. It has the form findall(Template, Enumerator, Instances) and can be read as: "Instances is the sequence of instances of Template which correspond to proofs of Enumerator, in the order in which they are found by the Prolog system" (The Craft of Prolog).
This reading is meta-logical because it depends on the instantiation of variables in Template and Enumerator, and on the proof strategy of the Prolog system.
So, for example, if you used a meta-interpreter to change the search strategy of Prolog from top-down to bottom-up, then the result of a call to findall/3 would change.
setof/3 does not have this problem, because it finds the ordered set of solutions and can fail (in contrast to findall/3, which returns an empty list).
You need to bear in mind that we as programmers are concerned with answers to a query, but Prolog returns proofs...
I now try to avoid findall/3 in my programs and only use it when I am doing input and output around my core program.
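A minimal sketch of the difference (the parent/2 facts are illustrative; queries and their answers are shown as comments):

```prolog
% Illustrative database.
parent(tom, bob).
parent(tom, liz).

% ?- findall(C, parent(ann, C), Cs).
% Cs = [].              % findall/3 succeeds with the empty list
%
% ?- setof(C, parent(ann, C), Cs).
% false.                % setof/3 fails when there are no solutions
%
% ?- setof(C, parent(tom, C), Cs).
% Cs = [bob, liz].      % ordered set, duplicates removed
```

So a failing setof/3 can be used to distinguish "no solutions" from "a solution that happens to be empty", which findall/3 conflates.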
Mendelian randomization is a good technique to start thinking about causality for epidemiological studies.
This is a good paper that demonstrates the approach: https://www.nature.com/articles/srep16645
Millard, Louise AC, et al. "MR-PheWAS: hypothesis prioritization among potential causal effects of body mass index on many outcomes, using Mendelian randomization." Scientific reports 5 (2015): 16645.
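A minimal simulated sketch of the core idea (this is the basic Wald-ratio estimator on made-up data, not the MR-PheWAS pipeline from the paper; all names and numbers are illustrative): a genetic variant acts as an instrument for the exposure, so the ratio of gene-outcome to gene-exposure associations recovers the causal effect even when a confounder biases the naive regression.

```python
import random
random.seed(0)

def cov(xs, ys):
    """Sample covariance (population denominator; fine for a sketch)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

n = 20000
G = [random.choice([0, 1, 2]) for _ in range(n)]   # genotype: the instrument
U = [random.gauss(0, 1) for _ in range(n)]         # unobserved confounder

# Exposure depends on genotype and confounder; outcome has a true
# causal effect of 0.3 from the exposure, plus the same confounder.
X = [0.5 * g + u + random.gauss(0, 1) for g, u in zip(G, U)]
Y = [0.3 * x + u + random.gauss(0, 1) for x, u in zip(X, U)]

naive = cov(X, Y) / cov(X, X)   # ordinary regression slope: biased by U
mr = cov(G, Y) / cov(G, X)      # Wald ratio: the instrument removes the bias
```

The naive slope is pulled well above 0.3 by the confounder, while the Wald ratio lands near the true effect, because the genotype is (by assumption) independent of the confounder.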
Moving toward a watch/headphones combo would be an attractive phone alternative to me. I would need to really sit down and weigh the benefits/losses of keeping a phone versus going whole hog and disconnecting completely, though.
I like the notion of 'context' for transfer learning, where the context can be parameterized.
The idea is that you learn a general model from your available data and you are able to specialise that model to perform well when adapted to different contexts.
A simple example is when the different contexts are different cost matrices or different expected ratios of positives and negatives.
So instead of learning a classification model from your training data, you learn a ranking model. You can make and adapt the different classification models (thresholds on the ranking) depending on the context of where that model will be deployed.
So, for example, you learn a ranking model from pictures that ranks women above men. When you want a classifier that classifies pictures as men or women, you choose the threshold on your ranking model depending on the misclassification costs for the context where the model is being deployed.
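A minimal sketch of that last step (all scores, labels, and costs are made up; in practice the scores would come from a learned ranker): one fixed ranking is turned into different classifiers by picking the threshold that minimises expected cost in each deployment context.

```python
def best_threshold(scores, labels, cost_fp, cost_fn):
    """Pick the score threshold minimising total cost for one context,
    where a context is a pair of false-positive/false-negative costs."""
    best_t, best_cost = None, float("inf")
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# One fixed ranking (illustrative scores), two deployment contexts.
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
labels = [0,   0,   1,    1,   1,   0]

# Context A: false positives are expensive -> a higher threshold.
t_a = best_threshold(scores, labels, cost_fp=5, cost_fn=1)  # 0.7
# Context B: false negatives are expensive -> a lower threshold.
t_b = best_threshold(scores, labels, cost_fp=1, cost_fn=5)  # 0.35
```

The ranking model is learned once; only the cheap threshold search is redone per context.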
I think a cool research theme is to think of similar tools for other aspects of transfer learning.
See https://swish.swi-prolog.org/p/non-monotonic_ilp.swinb for an example. The potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.
But s(CASP) finds the two potential rules that cover both positive examples but not the negative example.
i.e.
flies(X,h8):-not penguin(X).
and
flies(X,h17):-bird(X),not penguin(X).
Which is cool.
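For readers without the notebook open, a hedged reconstruction of the kind of background knowledge and examples involved (the facts below are hypothetical; the actual notebook may differ):

```prolog
% Hypothetical background knowledge.
bird(tweety).
bird(woody).
bird(tux).
penguin(tux).

% Positive examples: flies(tweety,_), flies(woody,_).
% Negative example:  flies(tux,_).
%
% Under s(CASP)'s non-monotonic semantics, "not penguin(X)" is what
% lets both candidate rules cover the positives while rejecting the
% penguin, something a purely monotonic learner cannot express.
```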