Eudaimonia 42



Current LLMs are Smart Tools, Nothing More

I thought we had moved past using sensational claims about LLM consciousness for publicity. However, Claude 3 has reignited this behavior, prompting me to shift the focus of this blog post to address the issue.

Please avoid wasting intellectual effort on crafting simple dialogues to suggest that models are conscious, debating whether Model X can respond to questions about consciousness, and similar endeavors. These discussions clutter media and social platforms with fear and disinformation without offering any real benefit to science or society.

Is it realistic to think that in the last ten years we've evolved from simple MLPs to a conscious machine through mere scaling, when biological evolution took billions of years, iterating through many species, to develop human intelligence? Can a machine that learns predominantly from English internet text truly understand the world like humans, who have sensory experiences and the ability to move through space and time? Are current supercomputers a good way to match the capabilities of the highly specialized human brain?

Please avoid emulating physicist William Thomson (Lord Kelvin), who is said to have claimed around 1900 that there was nothing left to discover in physics. Such assertions could undermine your credibility in the future. Before you post about ML consciousness again, consider the following questions:

  • - How would a human respond if asked to produce content in an unfamiliar language? Unlike a current LLM, which would generate hallucinated content without hesitation, a human would likely be too embarrassed to attempt an answer.
  • - Does a human start to get to know you from scratch every time you meet them?
  • - How can something be conscious (by definition, aware of its internal and external states) if it isn't aware that it's talking to you in another chat dialogue?
  • - Does a human require a supercomputer to learn?
  • - Is your cat's spatial and temporal reasoning better than the LLM's?
  • - Is your LLM capable of crafting a clever joke involving activities in the physical world?

We're in an era of significant advances in machine learning, and it's important to study and improve upon the failures and limitations of current models. But before we can even start talking about human-level intelligence and consciousness, we need an ML system that possesses memory, reasoning, and the ability to interact with and actively learn from diverse, multimodal information that we currently can't collect in appropriate amounts and types. Today's models are useful for simple tasks, but they fall short on more complex and structured inquiries. Unfortunately, our benchmarks are not very complex either, and many of them are built to test things that are hard for humans rather than hard for machines, which can give a false sense of great achievement.

In conclusion, we are currently pioneering a distinct form of intelligence that complements our own and may never achieve consciousness. But even if it does, there are many hard steps to take before we come close to that point.



[AAAI 2024] Notes

The AAAI 2024 conference, attended by 6,500 people, once again proved to be a significant event in the AI research community, showcasing research advances and fostering invaluable networking opportunities. Among the myriad sessions, talks, and meetups, several highlights stood out, reflecting the dynamic and rapidly evolving landscape of artificial intelligence. Below is an overview of the sessions I found most valuable.

Valuable Talks


  • - AAAI Award for AI for the Benefit of Humanity: Milind Tambe's presentation, "ML+Optimization: Driving Social Impact in Public Health and Conservation," underscored the potential of machine learning and optimization techniques to address critical challenges in public health and environmental conservation. This talk not only showcased innovative applications of AI but also emphasized the technology's role in societal betterment.
  • - 2024 Robert S. Engelmore Memorial Lecture Award: Raquel Urtasun's lecture, "Accelerating AVs with the Next Generation of Generative AI," provided insightful perspectives on the future of autonomous vehicles (AVs) and the transformative impact of generative AI technologies in this space. Her vision for the integration of AI in AV development promises to redefine mobility and safety on our roads.
  • - AAAI/IAAI Invited Talk: Yann LeCun's talk, "Objective-Driven AI: Towards Machines that can Learn, Reason, and Plan," explored the frontiers of AI, advocating for a future where machines possess the ability to learn from their environment, reason out solutions, and plan actions, thereby achieving a higher level of intelligence and utility.

Notable Papers


The conference showcased a plethora of innovative research papers, with topics ranging from therapeutic peptide generation to machine unlearning. Highlights include:
  • - A Multi-Modal Contrastive Diffusion Model for Therapeutic Peptide Generation: This paper presents a novel approach to generating therapeutic peptides, demonstrating the potential of AI in accelerating drug discovery.
  • - Outlier Ranking for Large-Scale Public Health Data: An insightful study on leveraging AI for public health, focusing on identifying outliers in large datasets to inform better health policies and interventions.
  • - MIND: Multi-Task Incremental Network Distillation: Introduces a framework for incremental learning in neural networks, paving the way for more adaptable and efficient AI systems.
  • - ExpeL: LLM Agents Are Experiential Learners: Demonstrates how LLMs can learn from interactions within simulated environments, improving their ability to understand and generate human-like responses.
  • - An RNA Foundation Model Enables Discovery of Disease Mechanisms and Candidate Therapeutics: By leveraging vast amounts of RNA data, the model can predict disease associations and suggest molecules for drug development with unprecedented accuracy.
  • - BrainLM: A Foundation Model for Brain Activity Recordings: BrainLM can decode neural signals into meaningful patterns, facilitating breakthroughs in understanding cognitive processes and neurological disorders.
  • - Beyond Attention: Breaking the Limits of Transformer Context Length with Recurrent Memory: The proposed approach significantly extends the model's ability to retain and utilize information over longer sequences without compromising computational efficiency.
  • - Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening: In an era where privacy concerns and data regulations are paramount, this paper presents a novel method for "machine unlearning" that enables AI systems to forget specific data quickly and effectively without full retraining.

Engaging Meetups


Networking and community engagement were central to the AAAI 2024 experience, with meetups covering a wide range of topics. I found the following meetups most intriguing:
  • - AI for Proteins
  • - AI for Drug Discovery
  • - Machine Unlearning
  • - Uncertainty in Transformers and LLMs
  • - ML for Health
  • - Biomedical NLP


My Journey with Mind Mapping in Notion

Embarking on a quest for a more streamlined and productive lifestyle, I ventured into the world of Notion a few years back. This blog post traces my journey of mapping my thoughts and organizing my digital life within Notion, and how it has evolved into a cornerstone of my daily routine.

From the outset, I was drawn to the platform's customizable workspace, which offered a unified repository for calendars, to-do lists, notes, plans, journals, and more. The ease of accessing everything online and the simplicity of sharing documents and information were compelling features. Yet, the very flexibility that attracted me initially also led to my early withdrawal. The ambition to consolidate my entire life into a single, well-organized space proved daunting. Structuring my life into a database and deciding what to include was overwhelming. Moreover, transitioning from my established routine of using Google Docs, calendar, and phone notes—despite its chaos—wasn't straightforward.

However, in 2023, with my PhD completed and newfound free time, I felt compelled to revisit Notion and fully integrate my workspace into it. Inspired by others who had personalized their organizational systems, I explored various methods. While none fit perfectly, they guided me in tailoring Notion to suit my needs. Ultimately, I settled on creating four distinct databases.

The first database organizes tasks into:

  • - Immediate tasks, which I schedule into my calendar weekly.
  • - Short-term goals, smaller objectives I aim to accomplish monthly, reviewed biannually.
  • - Long-term goals, larger endeavors that span years.
  • - Bucket-list items, dreams to fulfill when time allows.

The second and third databases catalog my personal and research interests, respectively. The "Save to Notion" browser extension has been a game-changer, streamlining the curation of these databases. My research database includes research papers, blogs, books, articles, teams, and talks of interest. My personal database encompasses movies, TV shows, YouTube videos, music, books, self-development tools, TED talks, recipes, and articles.

Different views for each category mitigate Notion's loading issues, which occur when a view has more than a couple of hundred items. With status tags like "Ready to check," "In progress," "Finished," and "Archive," I am able to manage content; "Archive" is reserved for items worth remembering but not immediately relevant. Additionally, a "Favorite" tag highlights particularly impactful content.

Another challenge with Notion was keeping it current. Initially, I dumped everything into it. Now, I curate content with lasting value and plan to periodically prune my collections.

The fourth database is a compendium of topics dear to me, such as plant care, personal finance, and health practices, replacing my need for Google Docs.

After six months with this refined structure, I've noticed significant improvements. While I occasionally use other tools, the disarray of managing information across various apps is a thing of the past. The "Save to Notion" extension continues to be invaluable. For instance, using Notion has transformed my cooking with an easily navigable recipe collection, enabling me to explore and track international cuisines effortlessly.

Moreover, as someone who prioritizes data security, Notion's capability to back up all databases into CSVs provides peace of mind.
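
Since I rely on those CSV backups, I like to sanity-check an export before archiving it. Below is a minimal sketch, assuming the unzipped "Markdown & CSV" export sits in a hypothetical notion_backup folder and that pandas is installed; it simply inventories each exported database.

```python
from pathlib import Path

import pandas as pd  # pip install pandas

# Hypothetical folder holding an unzipped Notion "Markdown & CSV" export.
BACKUP_DIR = Path("notion_backup")

# Each database in the export becomes a CSV file; print a quick inventory
# so you can confirm the backup contains everything you expect.
for csv_file in sorted(BACKUP_DIR.rglob("*.csv")):
    df = pd.read_csv(csv_file)
    print(f"{csv_file.name}: {len(df)} rows; columns: {list(df.columns)}")
```

A quick glance at the row counts against what I see in Notion is usually enough to confirm that a backup is complete.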

In closing, while not perfect, Notion has fundamentally changed the way I manage information, tackle my to-do list, and enjoy my hobbies.



Recent TED Talks That Inspired Me (part 2)


[ICML 2020] Notes
Note: This post is very subjective (based on my interests and knowledge) and not a full overview of ICML 2020

Tips from senior researchers:

  • - Work on sets instead of lists: Predicting Choice with Set-Dependent Aggregation, attention (Yoshua Bengio)
  • - Work on problem-driven research instead of algorithm/model-driven research
  • - What is the big problem you want to solve? Split it into smaller problems, then into tasks; choose a small part of a task that is realistic and may potentially advance the field, and do that
  • - It's important to start and finish things.
  • - If something turns out to be a bad idea, wrap it up, write a report on it at least for yourself
  • - Instead of focusing on advancing your career, focus on helping your students be the best they can be. If your students are advancing, you are advancing.
  • - Plant a lot of "seeds of collaborations", not many of these will grow to be trees but the few that do can end up being very fruitful
  • - Read papers only if you can understand what they are doing from the abstract and if the abstract/introduction are novel enough.
  • - If you have a problem with a mentee, schedule a meeting and ask them for a self-evaluation. If they are aware of their issues, make a plan for them to improve. If they aren't, tell them it's not a good fit and they should try somewhere else.

Papers of interest:

  • - Continuous domain adaptation, can be applied to medical and NLP domains
  • - Contrastive Multi-View Representation Learning on Graphs: in theory it should be applicable to knowledge graphs as well because we are looking at local regions by subsampling the graph. However, you'll need to replace the encoder with a relational GNN such as R-GCN.
  • - Self-supervision and MTL in GCN
  • - Time-Consistent Self-Supervision for Semi-Supervised Learning
  • - Topological Autoencoders
  • - Negative Sampling in Semi-Supervised learning
  • - Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled Data
  • - Deep Molecular Programming: A Natural Implementation of Binary-Weight ReLU Neural Networks
  • - Adaptive Adversarial Multi-task Representation Learning
  • - Which Tasks Should Be Learned Together in Multi-task Learning?
  • - Predicting Choice with Set-Dependent Aggregation
  • - Robust learning with the Hilbert-Schmidt independence criterion
  • - MetaFun: Meta-Learning with Iterative Functional Updates
  • - PowerNorm: Rethinking Batch Normalization in Transformers
  • - Differentiable Product Quantization for Learning Compact Embedding Layers
  • - Few-shot Relation Extraction via Bayesian Meta-learning on Task Graphs
  • - Calibration, Entropy Rates, and Memory in Language Models
  • - Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
  • - Abstraction Mechanisms Predict Generalization in Deep Neural Networks
  • - Decoding the genome with AI
  • - Pillars and future of ML (introducing very cool new works - LLL workshop)
  • - System 2 priors (LLL workshop)
  • - Continual learning from a learning perspective (LLL workshop)
  • - Learning on the Job in the Open World (LLL workshop)
  • - Rigging the Lottery: Making All Tickets Winners; improvement on images (better accuracy, 8 times smaller size), but not on RNNs
  • - VAE with Riemannian Brownian motion priors
  • - Perceptual Generative AE
  • - Latent Bernoulli AE
  • - Bio-inspired Hashing for Unsupervised Similarity Search
  • - Learning Autoencoders with Relational Regularization
  • - Learning De-biased Representations with Biased Representations; a good practical way to solve problems from Léon Bottou's paper
  • - Instead of warmup with transformers, use a different initialization to get better results

Interesting people

  • - Kevin Yang: currently at Microsoft, works on protein representation
  • - Mihaela van der Schaar: doing AutoML with traditional models on different medical problems
  • - Kaveh Hassani: Autodesk Toronto
  • - Jeff Clune: Research Team Lead at OpenAI, Associate Professor at University of British Columbia


[ACL 2020] Notes
Note: This post is very subjective (based on my interests and knowledge) and not a full overview of ACL 2020

Interesting people (selection made based on their research or interactions during the event):

  • - Colin Cherry, Google Montreal
  • - Jasmijn Bastings, Google Berlin
  • - Sebastian Gehrmann, Google Boston
  • - Ramakanth Kavuluru (BioNLP)
  • - Inkit Padhi, IBM (unsupervised text style transfer)
  • - Graeme Hirst, UoT (distinguished service award)
  • - Lucie Flek (social media analysis)
  • - Josh Tenenbaum (MIT: language parsing and grounding for robots)

Interesting papers (emphasized text marks the main idea of each paper):

Mentoring session: Long-term career planning + Becoming a research leader: building your professional identity

  • - Follow your interests and what you are good at, despite the trends
  • - Do something that you’d own and you’d be proud of
  • - Make sure that it’s doable within your time-frame
  • - Going into uncharted territory might give you bigger impact: maybe there is a low-hanging fruit there
  • - Include your own personality
  • - Add some non-research (or at least non-self-promotional) posts to your tweets, so you get more attention
  • - Make sure to have a side project for fun/breaks


[ICLR 2020] Tips for Prospective and Early-Stage PhD Students

Contributors (sorted by first name): Akari Asai, Carlos Miranda, Chiara Mugnai, Claas Voelcker, Divyansh Kaushik, Fairoza Amira Binti Hamzah, Jade Abbott, Jaydeep Borkar, Kalpesh Krishna, Karmanya Aggarwal, Makbule Gulcin Ozsoy, Marija Stanojevic, Martha White, Michael McCabe, Moritz Schneider, Rajarshi Das, Sabrina J. Mielke, Sagar Devkate, Tornike Tsereteli

Note: Text is US-biased, but many answers are internationally applicable. It was originally posted here.

How to get started in ML/DL?

Useful tips/materials for learning ML-related programming and basic deep learning knowledge:
  • - Books: "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow," "Deep Learning," and "Mathematics for Machine Learning"
  • - Watch the fast.ai courses (to learn the fast.ai library and PyTorch). To learn about optimization, take Stephen Boyd's course on Convex Optimization.
  • - Reading published code helps to see what the actual contribution of the paper was and you can map that back to the mathematical explanation in the paper. Then, the next time you see that mathematical explanation, you know exactly what it means computationally.
  • - For programming skills in DL, you can follow François Chollet (@fchollet) on Twitter (he usually posts some of the best tutorials on TF+Keras) or dive deep into GitHub repos (e.g., the TF object detection API, HuggingFace transformers...); you will find lots of best practices! Study them, understand why things are done the way they are, and practice.
  • - Reimplement papers on your own.
  • - Attend ML schools, such as JSALT or those mentioned here.
  • - It will take time to go through these materials, but don't give up. If possible, set aside two to three weeks to go through a course or book in detail without distractions. Otherwise, set aside a specific time of day when you'll do this.
  • - Auditing or watching YouTube videos of ML/DL courses from universities is also a great way to learn (e.g., Deep Learning for NLP, Deep Learning for Computer Vision)

How to keep track of novel research?

  • - Once a month, check arxiv-sanity.com for the last month: the "top recent," "top hype," and "recommended" tabs. Scan paper titles and abstracts and, based on that, download ~15 papers for further reading (see the sketch after this list for a way to automate part of this).
  • - Read fully just the most interesting of those (~5) and get the most important ideas of the others.
  • - Listen to talks from ~5 high-quality workshops/conferences per year within your area of interest to gain new ideas. You can search for them on YouTube (the Institute for Advanced Study channel, for example), slideslive.com, and videolectures.net. If possible, try to attend some of them live.
  • - Attend highly relevant/quality talks in your area to get to know relevant people.
  • - Subscribe to relevant blogs' weekly/monthly updates. Pick blogs based on their quality and your interests. Many companies and big ML research groups have blogs. Examples of blogs/newsletters to follow: on ML theory, also ML theory (no new posts), probability, broad and beginner-friendly, on ML theory, on NLP, on deep learning, broad machine learning with industry-related updates, and BAIR research. If there is no newsletter subscription, you can subscribe to RSS feeds or follow the authors on Twitter.
  • - Talk to colleagues and ask them what they currently think is important.
  • - Following a Twitter bot that posts new arXiv papers daily might be helpful -- you don't need to read all of them, but if a title looks relevant to your current project or sounds interesting, just click the link to the arXiv page and skim the abstract.
  • - Keep a detailed multi-doc journal of ideas and research you've read, and use paper-organizing software. Zotero, Mendeley, Roam Research, Google Docs, Microsoft OneNote, Notion, and Trello are some of the tools you can look into. Notion has a bit of a learning curve, which makes it non-ideal, but it's the best thing I've found that syncs across multiple machines. Trello lets you set up a board with a list of categories (e.g., research topics, resources), under which cards represent subcategories (e.g., tools, datasets, tutorials). You can also create a GitHub repo with a README with subsections for the research topics you're interested in (you can put papers, blog posts, libraries, tools, and other resources under each topic).
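
For the arXiv part of this workflow, a short script can do the monthly sweep for you. Below is a minimal sketch using the public arXiv API; the cs.LG category and the result count of 15 are placeholders for your own interests, and it assumes you have the third-party feedparser package installed.

```python
import feedparser  # pip install feedparser

# Public arXiv API endpoint (returns an Atom feed).
# cat:cs.LG is an example category; swap in your own area of interest.
ARXIV_API = (
    "http://export.arxiv.org/api/query"
    "?search_query=cat:cs.LG"
    "&sortBy=submittedDate&sortOrder=descending"
    "&max_results=15"
)

feed = feedparser.parse(ARXIV_API)
for entry in feed.entries:
    # Titles and abstracts can contain newlines; flatten them for scanning.
    title = " ".join(entry.title.split())
    abstract = " ".join(entry.summary.split())
    print(f"{title}\n  {entry.link}\n  {abstract[:200]}...\n")
```

Skimming this output is a quick way to shortlist the ~15 papers mentioned above before deciding which ~5 to read in full.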

How to apply for a PhD?

  • - Check out blogs: Machine Learning PhD applications, How to Pick Your Grad School, and Student Perspectives on Applying to NLP PhD programs
  • - Find/meet/contact a potential advisor before the application period. Read papers in areas you are interested in and look into websites of those researchers. If they don't have open positions announced, contact them by email (except if it's otherwise mentioned on their website). Conferences are a good way to meet potential advisors.
  • - Try contacting advisors as early as you've made up your mind about grad school / research direction.
  • - Good letters of recommendation from people already in the field are the best way to get noticed during applications. Recommendations from academia are very important, while those from industry are rarely taken into account unless the recommender has a good reputation in the research community and a Ph.D. in the field. Recommendations from senior PhD students or even postdocs are not very valuable.
  • - You should consider applying for a PhD if you think you have a good research profile (e.g., you worked on research projects during undergrad, did research internships, or attended research-intensive summer programs like JHU's JSALT). If you don't have research experience and want to gain some before applying, a master's might be more appropriate. Be aware that professors may expect more from people who have finished a master's.
  • - In recent years PhD admission in CS/ML got more competitive in general, so people with previous research experience (and published papers) seem to have better chances.
  • - Other ways of getting research experience are applying for a semester as an undergraduate research assistant, a fellowship, or a research course, or working as a technician/software engineer within a research lab.
  • - Residency programs offered by industry research labs (see list) are also great opportunities, although the application process is known to be competitive.
  • - How do you think Covid-19 will affect next year's application rates and acceptances? It's still unknown, and it depends on the institution, money flow, and federal immigration regulations.

How to find and pick a PhD advisor?

  • - How to find a PhD advisor? Find a topic that you are interested in researching. Look into papers in that area and who authored them (which groups are active in that field). Out of those, select groups at universities/locations where you can imagine living for the next five years. Look into the websites of those groups, their departments, or the professors' Twitter accounts, as they might have publicly available calls for PhD students. If not, try contacting professors or students from those groups to ask if there are open positions (attach your resume).
  • - Try to meet or get to know potential advisors as much as possible before applying to a PhD. Contact their students/postdocs with any questions you have.
  • - It's important that your advisor's working style fits you well.
  • - Questions to ask a prospective advisor: How much time do they spend with students? What's their working style? What kind of work, and how much, have they published in the last few years? How stable would your funding be? How many other faculty in the department work on your area of interest? What is their advising style? What traits do they prioritize? What do they expect from their PhD students? How much non-research work would you need to do (lab maintenance, meetings, ...)?
  • - If you are interested in doing internships, ask whether their current students do internships, and at what places.

Going to graduate studies (getting into research) after working in industry

  • - What will change: you'll get less money, but more time and freedom.
  • - Many current PhD students are satisfied with their stipends.
  • - You'll probably need to move. Think if that is feasible in your situation.
  • - Should you move from industry to academia if you are interested in a specific research topic? It depends on the topic and your background. If you want to transfer from physics into ML research and you know how to code, there is a way to do it without going into academia, but you need to choose an appropriate employer. In many other cases that's not possible. Try applying for a role you aspire to: if you get invited to interviews and do well in them, you are good; otherwise, you'll see what's missing and how much more work you need (and whether you need to go back to school).
  • - When should you give up industry for academia? If you see yourself working on a research topic for the next five-plus years and you highly value the time and freedom to pursue your research aspirations, academia is probably a good choice. If you want to get into ML because it's popular and you heard it's cool, start by taking a few online courses in your free time and building some small things. Then re-evaluate your aspirations.

Things to keep in mind in order to make the most of your PhD

  • - Are you solely responsible for generating new research ideas? Or do advisors play a significant role? That depends a lot on your advisor, the size of your research group, and how well your research interests line up with your advisor's goals. It's almost a given that your advisor will have some project in mind that they think you'd be a good fit for when they accept you. But if that doesn't line up with your career goals, you might find yourself needing to be significantly more independent (though your advisor should still be a good resource to help refine your ideas). Talk about this with your potential advisor during the application process; that's really the only way to know their expectations. It also depends on the advisor's advising style; talking to their students about it would help.
  • - What is one piece of advice you wish you had received before starting your PhD? Don't forget to take care of yourself, including enough sleep, basic exercise, good-quality food, and mental health counselling if needed. Expect to have good and bad periods. Procrastination is part of most PhDs; find ways to shorten the procrastination periods (exercise, short trips, gatherings with friends, a more relaxed/tighter schedule...).
  • - What is the role that the groups you are in (excluding the prof) have played in your learning? If people work on a related topic, you can get help from them or actively discuss with them to brainstorm. If people are working on different topics but are open, you can still brainstorm with them and get some inspiration from different fields.
  • - What is the balance between doing everything yourself vs. taking help and support from your group? It's good to be in a collaborative group where people help each other (especially before deadlines); an overly competitive environment is not good for mental health. On the other hand, doing some things by yourself helps you learn to be more independent.
  • - Is it common to change a PhD program after spending a year or two? It is not common, but it happens for different reasons. Give your best to pick the suitable university, program, advisor and research topic, so you don't need to change.
  • - What is the procedure to transfer to another program? Many schools don't have this option, so you might need to start a PhD program from the beginning. If you want to transfer, try to do it as soon as possible. Some schools will allow credit transfer for courses you've taken. When it comes to applying to a new school, the process is the same as for first-time PhD applicants. Therefore, look first into transfer options within the same university.
  • - What are tips to organize and optimize one's research project and make it faster? How can I save time by preparing better? I learned the hard way that I often didn't need to write a lot of code, because the functionality was already available in a well-established library like scikit-learn, or someone had a good GitHub repo with the same. Checking online first can save time. Writing modular code that you can reuse as a personal library is also very helpful. Many non-CS students don't use Git often; I think they are missing out. Finally, documenting your experimental setup, results, and thoughts about future experiments/ideas is very helpful, especially when working in a team.
  • - What's the best part about conferences? How do you make the most of them? Any tips and tricks? Skip most talks and chat with everyone at posters; the most important track is the hallway track. You will miss out on some things, and that's okay. Sneak out and enjoy the city, but go to posters, ask questions, and hang out with cool people (like us).
  • - How useful/important is it to have Academic Twitter? Since many good researchers are active on Twitter, having an account helps in many ways: you'll hear about new ideas, events, and papers earlier, and you'll grow your network and promote your work more easily. Researchers also often advertise their work soon after posting it on arXiv, so you won't overlook it if you follow them. That being said, try not to become addicted to it.
  • - Pay attention to writing and presentation skills, as they will influence your research path. A useful writing course is available on Coursera. Also, look into this article and this talk.
  • - How to manage time and what to prioritize? Look into Devi Parikh's blog for advice.

Finding an industry position after finishing an ML master's/PhD

  • - If you want to go to industry, internships are very important, so try doing at least one.
  • - Leetcode helps a lot, but in some cases your GitHub portfolio helps even more. Industry doesn't look at your publications much (except for research scientist positions); instead, they will ask how to turn your publications into products.
  • - In Silicon Valley and similar ecosystems, more often than not, the initial rounds of interviews are run by software engineers, so leetcode problems play a major role.
  • - Open source contributions are valuable.
See also the Grad Resources blog, which has many other useful details.


TED Videos That Inspired Me (part 1)

I am in love with science and technology, and TED talks on those topics are especially inspiring. This is my selection of the best TED talks (those I'd like to listen to again). They also cover topics such as education, self-improvement, and more. Keep in mind that the videos were watched and selected in the year they were published; some of the older ones might not be as relevant anymore.

Data Science, Machine Learning and Artificial Intelligence

Technology

Science

Education

Self-improvement

Social sciences, history, economy, politics and activism



Working Heart Tissue Made from Spinach
There are a few websites and platforms that I follow closely on social networks, as they constantly report on cool inventions and good practices around the world. Those platforms are:
    • - World Economic Forum
    • - Futurism
    • - SpaceX
    • - Less Plastic (facebook group)
    • - NASA Space Center in Houston
    • - Nature (research)
    • - Nature News and Comment (facebook group) - they have great Spotlight videos on top of their page featuring amazing things
    • - Blog of Bill Gates (https://www.gatesnotes.com/)
    • - Svet nauke (in Serbian)
The most interesting recent article I'd like to share is about how scientists succeeded in making beating heart tissue from spinach leaves. Check out the video.



How to Continue Improving After School?
Constant education is inevitable in the 21st century. Even students right out of college need to keep learning:
  • - We forget a lot of stuff we learn
  • - The world changes constantly: there are new discoveries, technologies, and even new jobs
  • - Four years weren't enough to learn everything in your field
If you believe that after graduation you'll never have to read or learn anything new, you'll end up disappointed or jobless. So what should you do? Find things that interest you, find a way to learn them, and be the best you can at them. I've tried many learning approaches so far; here is my list of preferences in decreasing order, although it's best to combine them all:
  • - Find a mentor: someone who has achieved the things you want to achieve and who is willing to help you along the way. LinkedIn or university alumni lists and meetings are good places to look for such people. There are also NGO mentorship projects in some parts of the world that try to match mentors and mentees.
  • - Find friends who have similar goals (not necessarily in the same field) and try to discuss and discover things together with them. In my opinion, it is better to have friends interested in different fields, with different learning approaches and views of the world; that will help you develop highly valued multidisciplinary skills and learn about the world from different perspectives.
  • - Find massive open online courses (MOOCs) that can help you learn more about your topics. I find MOOCs good for beginners (and in some cases intermediates), but if you really want to excel at something, you still need to read books/papers and work on your own projects. If you know who the leading people in your field are, check their websites for additional materials or book suggestions.
List of open course platforms:
  • - Udacity is the best one in my opinion, since all courses are free, mostly self-paced, and designed in cooperation with companies. However, it only covers computer science topics.
  • - Coursera has many courses on different topics, usually created by universities. It is possible to audit most of the courses for free.
  • - edX is similar to Coursera, but it might have some courses not offered there
  • - Udemy covers a wide range of courses, most of which you have to pay for. Still, you can find a lot of free ones, and there are a few sales per year when all courses are really cheap. Choose courses carefully; not all of them are of good quality (most have reviews to guide you)
  • - Alison has a lot of courses and covers many different areas. Most courses are free, but I don't know their quality, since most of the topics offered there didn't interest me.
  • - Lynda is LinkedIn's learning platform, which you pay for by usage time (at some universities and companies you can get an account for free). It also covers many topics across various fields.
  • - KnowledgeCity: I find the courses on this platform basic; they should only be used by those who don't have a degree in the field. Courses are free.
  • - OpenEdu
  • - OpenCulture
  • - BoingBoing
  • - Aziksa
  • - FutureLearn
Open course initiatives from some of the best universities:
  • Carnegie Mellon University
  • Berkeley
  • MIT
  • Stanford
  • Harvard
  • Yale
How do you pick the best online course? Know the topic you want to advance in and google for experts on that topic. Most probably one of them offers an online course. However, also make sure that the course level is appropriate for your knowledge. If you are a beginner, some courses, especially those taught at top universities, can be overwhelming.

Why this blog?

The purpose of this blog is to save memories, describe my trips, and discuss interesting people and processes that caught my attention. I'll share posts about science, technology, education, and traveling, which are my favorite topics. I'll recommend world innovations, books, and music. Finally, I'll try to describe some of my research. All posts represent my personal opinion or experience and are chosen based on my preferences. It's up to readers to read them critically and form their own view of the information presented here.

How did the blog get its name?

Eudaimonia comes from Greek and can be translated as happiness, prosperity, or welfare. The number 42 comes from the book "The Hitchhiker's Guide to the Galaxy" as the answer to the "Ultimate Question of Life." On page 42 of "Harry Potter and the Philosopher's Stone," Harry discovers that he is a wizard. The ASCII character with code 42 is the asterisk, which in computer science is a wildcard meaning anything you want.