Hacker News | Machow's comments

For what it's worth, in Psychology (by APA standards) first-person voice is discouraged, but so is passive voice. In the passages you use, removing first person (almost) requires using passive voice. I would say avoiding first person is what discourages a very informal, prosy style, while frequent passive voice often feels indirect.


> 1st person voice is discouraged

"Here we make use of measurable selection."

So, use of the first person plural is quite widely accepted in nearly all of current and recent mathematics.

I wrote my Ph.D. dissertation this way, and one professor said "When you say 'we' maybe I don't agree?" and thankfully a fellow student spoke right up and defended me; she explained that using "we" was standard in mathematics.

Once I was trying to socialize with a high school English teacher and sent her a draft of a paper I was about to publish in some applied mathematics and asked her to give the paper a critical reading. Soon she asked me if using "we" was standard in mathematics, and I had to say yes. She gave me no more feedback! Gee, that's much better than what I got from English teachers in high school and college!

In the end, I first learned to write in college and by writing proofs in pure mathematics; the reason I was able to soak up the lessons was that such writing, as English, is so darned simple. Later I branched out from such simplistic writing.

Later I was trying to socialize with a woman who was a secretary in a university. She confessed that, in her experience typing, etc., the really clear writing was from the professors of mathematics and the physical sciences. Maybe she was just trying to butter me up!


I use "we" all the time...but I try to make it mean "me and the reader", i.e. "we then look at ....". In that sense, a paper should be a conversation between you and the reader even though the reader is passive in the interaction.


I'm struggling to figure out how you would write lab instructions in neither first person nor passive voice. Would you just credit all actions to an anonymous experimenter, described in the third person?


That's fair. For psychology methods it's easier to avoid both, at times, since you can write "participants viewed...", etc. For other types of methods, especially where the object becomes implicitly understood, or would be redundant to state, I can see where passive voice might be useful. For instructions, you can just leave off the implicit "you should", e.g. "put X in Y". The tradeoff between passive and first person is important, and which is appropriate likely depends on the circumstance. They're both discouraged, but not banned, I'd imagine because poor writing often uses one or the other too often.


I think the algorithm he is suggesting is:

1. Get the empirical probability that the vote margin is within 2 million, from the previous N elections (so, 1 if it is within 2 million and 0 if not, then average). Use it as p(vote within 2 million).

2. Assume that if the margin is within 2 million, then the exact number of votes is uniform between 0 and 2 million. Then the probability of a tie is p(vote within 2 million) / 2 million.
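Those two steps can be sketched in a few lines; the historical margins below are made up, and the 2-million threshold and uniform assumption are the ones from the comment, not anything principled:

```python
def prob_tie(margins, threshold=2_000_000):
    """Estimate P(exact tie) from historical vote margins.

    Step 1: empirical P(margin within threshold) -- average of 1/0 indicators.
    Step 2: assume the margin is uniform over the threshold's range when it
    falls inside it, so P(tie | within threshold) = 1 / threshold.
    """
    p_within = sum(1 for m in margins if abs(m) <= threshold) / len(margins)
    return p_within / threshold

# Hypothetical margins (in votes) from five past elections:
margins = [3_500_000, 1_200_000, 450_000, 5_000_000, 900_000]
print(prob_tie(margins))  # 3 of 5 within 2 million -> 0.6 / 2e6 = 3e-07
```

The uniform assumption is doing a lot of work here; in practice you would want the conditional distribution of close margins, not a flat one.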


3D barcharts have a longer history than I thought!


"given this measurement and our prior beliefs, the probability of page A being better than page B is X%"

FTFY ;). I think Bayesian methods add a lot of interpretive power, but I'm not sure that it would help people make a correct interpretation. I suspect that if practitioners are neglecting the difference between a one-sided and two-sided test, they will likely forget (or gloss over) what priors are (and their non-trivial implementation).

I definitely agree that there is a disconnect between the math and its interpretation, though.


In an A/B test where you usually get so much data, priors honestly don't matter much. Just use a flat prior. You'll overestimate the uncertainty a bit, so you may need a couple more data points than necessary but it's still way less than you'd need for a frequentist method. An A/B testing company could even automatically come up with better priors based on A/B tests that their customers have done in the past.
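For conversion-style A/B tests, the flat-prior version is only a few lines. This is a minimal sketch assuming Beta-Bernoulli conversions; the counts are invented, and Beta(1, 1) is the flat prior being described:

```python
import random

random.seed(0)

# Hypothetical observed data: conversions and visitors per variant.
conv_a, n_a = 120, 1000
conv_b, n_b = 150, 1000

def posterior_sample(conv, n):
    # Flat Beta(1, 1) prior -> posterior is Beta(successes + 1, failures + 1).
    return random.betavariate(conv + 1, n - conv + 1)

# Monte Carlo estimate of P(rate_B > rate_A) under the two posteriors.
draws = 100_000
p_b_better = sum(
    posterior_sample(conv_b, n_b) > posterior_sample(conv_a, n_a)
    for _ in range(draws)
) / draws
print(p_b_better)
```

With this much data the posteriors are tight enough that the prior barely moves the answer, which is the point of the comment above.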


I really like the broad coverage that articles like this give to the psychology of learning, but I wonder how important some of these principles are in a practical sense. Sure, the effect of studying and retrieving in the same context is a classic effect, but how strong is it?

It's like reading an article on how to make money that doesn't even give you a ballpark estimate of how much money each strategy could make you.


> doesn't even give you a ballpark estimate of how much money each strategy could make you

Public strategies have short lives in efficient zero-sum markets. Here is a site with resources and active discussion on mnemonics, memory palaces and other practical memory improvement techniques: http://artofmemory.com/


Sorry if this sounds condescending. I'm asking out of total naivety. What is the background of people who typically run A/B tests? Are they often unfamiliar with things like multiple comparisons, stopping problems, and the like?


I can answer from personal experience. I'm a dev at a tiny startup. I took statistics at college 20 years ago but haven't used it since the final exam, and remember little other than sample size is critical. I've figured out through recent experience that you need to let a test run a while - results seem to flip flop for the first few thousand visitors. I use tools like VWO because it makes it simple to run a simple one variable test, and I'm happy with the results (though ignorance is bliss, so maybe others would not be so content).


No condescension perceived :-)

I think that background varies - some people are fluent statisticians and others come from different disciplines / careers. Anecdotally, my experience from browsing A/B testing advice online is similar; some content is very statistically savvy and some is a bit oversimplified.


This seems to be exactly the case in psychology. Undergraduate students often assist with experiments in return for research experience on their resumes. Depending on the degree to which they were involved in a project, they may be thanked in a paper, but not listed as a coauthor.


Research is significantly different from private, for-profit business.

And those studies are always IRB-approved. The IRB takes into account the conflict of interest, and will sometimes protest if you don't pay the student subjects (e.g. if the study involves a significant time commitment).


I'm not talking about students who participate in experiments as subjects. The students who are research assistants are sometimes unpaid, and might not receive course credit for RA work, but contribute code to projects for research experience. It's relevant to working for a private, for-profit business in the sense that the alternative is often doing an internship for a private company.

The IRB only monitors study participants, not research assistant work.


Ah, I misinterpreted your comment. Sorry.

> It's relevant to working for a private, for-profit business in the sense that the alternative is often doing an internship for a private company.

I still think there's a pretty significant ethical difference between not-for-profit research and a for-profit business. Even without the student-teacher relationship, unpaid internships in CS are pretty ethically dubious and uncommon.


Agreed. That title made me wonder what kind of crazy set of working memory training experiments took place. And I don't even know how they would test things in the other direction...


I was surprised by that, too. I work in a lab that studies intelligence, and IQ tests are generally designed to measure g (general cognitive ability). It's like saying, "we're interested in measuring height, and I'm not talking about inches".

I'd imagine he's saying that they want to assess general cognitive ability in ways traditional tests do not.


Easy. Think of Sheldon from Big Bang Theory. He may have a high IQ, but he lacks the ability to be able to connect different ideas and/or come up with solutions which require more than just processing mathematical computations.


I don't think it's a good idea to let your world view be influenced by fictional TV characters.


What kind of fictional characters should I allow to influence my world view? Those from books written in the 20th century? Shakespeare? The Iliad? Any advice appreciated.


I'm not, he's just the example that I thought best sums up what I'm trying to describe...


Can you give an example of someone in the real world who has "a high IQ, but lacks the ability to be able to connect different ideas and/or come up with solutions which require more than just processing mathematical computations"? IQ is not a measure of arithmetic processing capability.


Probably not one you know well...


Setting aside the psychometrics discussion, it's illegal in the US to hire based upon IQ tests.




P&G uses an IQ-like test, from what I've read. The NFL uses the Wonderlic IQ test.


"Legally difficult" would have been more accurate.

