Wednesday 2 April 2014

Slog #9 (Week 11) Sorting and Efficiency

During weeks 10 and 11, we explored various types of sorting algorithms and their efficiency. The 2 sorting algorithms we focused on were quick sort and merge sort. Pretty cool names for sorts, huh? Although I do like the name of bubble sort a lot better. Because bubbles. So how do they work?

Quick sort is an interesting sort built around the idea of a pivot. The way the algorithm works is we pick an element from the list to be our pivot (what's pretty neat is that in some implementations the pivot is chosen randomly, while others have a formula to pick one. I think it's cool that people can approach this algorithm in various ways). Then we reorder the list so that all elements less than the pivot go before it, and all the elements greater than or equal to the pivot go after it. We do this recursively with the left side of the pivot and the right side of the pivot until the list is sorted. It's a pretty wicked sorting algorithm in my opinion. So where does efficiency come into play? Actually, let's take the time to talk about it for a bit.
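To make this a bit more concrete, here's a rough sketch of quick sort in Python. It's just my own take (not necessarily the version from lecture), using the first element as the pivot:

    def quick_sort(lst):
        """Return a new sorted list, using quick sort."""
        if len(lst) < 2:    # base case: 0 or 1 items is already sorted
            return lst
        pivot = lst[0]      # one simple (not random) choice of pivot
        smaller = [x for x in lst[1:] if x < pivot]
        larger = [x for x in lst[1:] if x >= pivot]
        return quick_sort(smaller) + [pivot] + quick_sort(larger)

    print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]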

When it comes to efficiency, programmers often focus on the worst case. Wait, what? Worst? I thought we were aiming to be the very best, like no one ever was? Nope. We're aiming for the worst. To become the best. Get it? Haha. People care about the worst case because it is important to know how much time an algorithm could take, at most, to finish. If we know the worst case for an algorithm, then we can plan around it appropriately. One notation we use to describe run time is Big O notation. We say things like "this algorithm has a runtime of O(n^2)". What does that mean? The function inside the O describes how the runtime grows in proportion to the size of the data, n. So O(n^2) means the runtime grows with the square of the size of the data.
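To make that notation concrete, here's a small made-up example (not from class): a function with a loop inside a loop does roughly n * n bits of work, so we'd call it O(n^2):

    def count_equal_pairs(lst):
        """Count pairs of equal items in lst, the deliberately slow O(n^2) way."""
        count = 0
        for i in range(len(lst)):              # outer loop runs n times
            for j in range(i + 1, len(lst)):   # inner loop runs up to n - 1 times
                if lst[i] == lst[j]:
                    count += 1
        return count                           # about n^2 / 2 comparisons in total

    print(count_equal_pairs([1, 2, 1, 3, 2, 1]))   # 4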

Back to quick sort, what is its worst-case run time? O(n^2). We explored this idea in class: the pivot is always the first item in the list, and the list is already sorted. Each step then only splits the pivot off from the rest of the list, rather than producing 3 useful pieces (left list, pivot, and right list). So the function has to run through n - 1 items, then n - 2 items, then n - 3, and so on down to 1. Adding that up, we get the familiar formula 1/2(n - 1)(n), which becomes 1/2(n^2 - n), and when you drop the constants and the smaller term, you get n^2. Hence, O(n^2) runtime.
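Just to convince myself, here's a quick check I wrote (my own little experiment, not from class): counting the element-versus-pivot comparisons quick sort makes on an already-sorted list, with the first element as the pivot, gives exactly n(n - 1)/2:

    def count_comparisons(lst):
        """Count one comparison per non-pivot item at each level of quick sort."""
        if len(lst) < 2:
            return 0
        pivot = lst[0]
        smaller = [x for x in lst[1:] if x < pivot]
        larger = [x for x in lst[1:] if x >= pivot]
        # count one comparison per remaining item; the extra constant factor
        # from doing two list comprehensions doesn't change the big O
        return (len(lst) - 1) + count_comparisons(smaller) + count_comparisons(larger)

    n = 100
    print(count_comparisons(list(range(n))))   # 4950, on an already-sorted list
    print(n * (n - 1) // 2)                    # 4950, the same quadratic total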

The other algorithm was merge sort. Now this is some fancy stuff. Divide-and-conquer style sorting, heh. The list is split into 2 parts, and it recursively does that until each piece has size 1; then the pieces that were split are merged back together in sorted order. It does this recursively, sorting at the smallest list size and working its way up. I think it's another cool sorting algorithm that's pretty damn clever. So what about the efficiency of merge sort, more importantly its worst-case run time? Well, repeatedly splitting the list in half (and then doubling the size of the sorted sections on the way back up) gives about log n levels, and at each level it has to pass through all n items (the length of the data set), so the big O is O(n log n).
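And here's a rough sketch of merge sort in Python, again just my own take rather than the exact lecture code:

    def merge(left, right):
        """Merge two already-sorted lists into one sorted list."""
        result = []
        i, j = 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]   # one of these two slices is empty

    def merge_sort(lst):
        """Return a new sorted list, using merge sort."""
        if len(lst) < 2:          # base case: size 0 or 1 is already sorted
            return lst
        mid = len(lst) // 2       # divide...
        return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))   # ...and conquer

    print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]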

This is my last slog, so thank you all for taking a peek. See you around!

Saturday 1 March 2014

Slog #6 (Week 7/8) A Linked List To The Past

During weeks 7 and 8 we explored the idea of linked lists. Simply put, a linked list is an abstract data type that consists of 2 parts: a head and a tail. The head contains an object, usually a str or an int. Now here comes the cool part. The tail consists of another linked list (well, the last linked list's tail is None).
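In code, a bare-bones version might look something like this (the class and attribute names here are just my own, not necessarily what we used in class):

    class LinkedList:
        """A linked list: a head value plus a tail that is another LinkedList (or None)."""

        def __init__(self, head, tail=None):
            self.head = head   # the object stored here, usually a str or an int
            self.tail = tail   # the rest of the list, itself a LinkedList, or None at the end

    # the list 1 -> 2 -> 3, built from the back
    lst = LinkedList(1, LinkedList(2, LinkedList(3)))
    print(lst.head)            # 1
    print(lst.tail.head)       # 2
    print(lst.tail.tail.tail)  # None, we've reached the end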

Sunday 23 February 2014

Slog #5 (Week 7) Recursion? Did you mean... Recursion?

Recursion? Recursion. Recursion is really cool. No, seriously. It's like Inception; that's probably the best way to describe it. Within a function, we call the function itself. And we do so until we reach the base case of the function. Mind you, every time we call the function, the input has to be different in some form, usually shorter or smaller than the previous input. Recursion has been fairly easy, but with the introduction of binary trees and linked lists, it has suddenly become super hectic. If tracing it wasn't enough, you have things like linked lists, which are recursive themselves (the tail is itself another linked list), so you have to deal with, like, double recursion. Double inception. Woah.

To be more in-depth about recursion, the idea is that we take a problem, and we work at it continuously with the same method, but on a smaller scale each time, until the problem becomes so small, that it is easily dealt with. Now when we work backwards, these small solutions add up back into a big solution. We work with the problem recursively. At times it may be difficult to visually see how it works, but once you get in the mindset of it, it becomes very easy and useful.
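A tiny example of that idea (my own, not from lecture): adding up the numbers from 1 to n by shrinking the problem each call until we hit the base case:

    def add_up(n):
        """Return 1 + 2 + ... + n, computed recursively."""
        if n <= 0:                  # base case: nothing left to add
            return 0
        return n + add_up(n - 1)    # same problem, but smaller

    print(add_up(5))   # 15, built back up from add_up(0) == 0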

Saturday 15 February 2014

Slog #4 (Week 6) Stumped With Trees

I'll admit, I'm stumped. Like a tree. Heh, get it? I'm not actually stumped, I only said so to use lame puns. So what's the deal with trees? For starters, they aren't like real trees. This week we dealt with binary trees. Think of a blob. And then it grows 2 limbs with blobs at the ends. And those 2 blobs grow 2 more limbs with blobs at the ends, and so on. A binary tree is something like that. I really suck at explaining. Simply put, it's another abstract data type. In this case, you have an object that has 0, 1 or 2 children. The max number of children a node can have is called the arity, or branching factor. Binary trees have an arity/branching factor of 2. No surprise there. But the cool part is that the children themselves are binary trees. It was fairly easy to understand the concept, and it was fun during the labs to explore how to work with the binary trees themselves. The idea of a binary tree's children being binary trees brings back the idea of recursion, and the labs helped me refine my ability to write recursive functions more efficiently.
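Here's roughly what that looks like in code (names are my own, not necessarily the class we wrote in lab): a binary tree node whose children are themselves binary trees, plus a recursive function that counts the nodes:

    class BTNode:
        """A binary tree: a value plus left and right children, each a BTNode or None."""

        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left     # left child, itself a binary tree (or None)
            self.right = right   # right child, itself a binary tree (or None)

    def count_nodes(node):
        """Return the number of nodes in the tree rooted at node."""
        if node is None:         # base case: an empty tree has no nodes
            return 0
        return 1 + count_nodes(node.left) + count_nodes(node.right)

    tree = BTNode(1, BTNode(2, BTNode(4)), BTNode(3))
    print(count_nodes(tree))   # 4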

This week is also the week we have to hand in assignment 1. I've spent countless hours reading it, and from what I've managed to code, I think everything is okay. I think the trick is to really read it until it hurts, then read some more. And then hopefully by then it makes sense. If future me is reading this, I hope you ace this project.

EDIT: Woo, we aced it. Hell yea.

Sunday 9 February 2014

Slog #3 (Week 5) Why I Don't Like Cheese

I don't like cheese. And when assignment 1 has to do with cheese, my hatred for it grows even more. I do like cheese on pizza. And Kraft Dinner. And lasagna.

Okay, seriously though. I've stared at some sections, and I don't get it.

EDIT: I still don't get it

EDIT 2: I GET IT.

EDIT 2.5: On a serious note, today we spent more time refining our ability to code recursively. I think that recursion requires a certain mindset, and that eventually, over time, one adapts and is able to use it with ease. One of the difficulties I find with recursion is finding the base case and tracing the recursive program overall. I see recursion as taking a problem and making it smaller, until we are able to tackle the small problem with ease. But tracing our way down to the small problem can require so many steps that one can get lost in it. For example, when I tried to implement the M(n) (least number of steps) function, I had a difficult time because I wasn't thinking with the proper recursive mindset. But eventually, it clicked. Fellow coders, good luck on assignment 1.

EDIT 3: Oh ho ho you clever *******s.

EDIT 4: In class today, Danny went over the Towers of Hanoi. One of his explanations helped me get out of a ditch I'd been stuck in for a few hours.
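For the record, the classic three-peg version of counting the least number of moves looks like this (the assignment's M(n) with more stools is a different, trickier recursion, so treat this only as an illustration of the mindset):

    def moves_needed(n):
        """Least number of moves to shift n cheeses with three stools (classic Hanoi)."""
        if n == 1:                           # base case: one cheese takes one move
            return 1
        # move n - 1 cheeses aside, move the biggest one, move the n - 1 back on top
        return 2 * moves_needed(n - 1) + 1

    print([moves_needed(n) for n in range(1, 6)])   # [1, 3, 7, 15, 31]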

Saturday 1 February 2014

Slog #2 (Week 4) - Recursion and List Comprehensions

Today I learned recursion and list comprehensions. Okay, enough with the boring formal talk. Heh. List comprehensions are pretty sick. I've never actually dealt with them before, so seeing them in action today was pretty cool. Being able to create a list automatically in one line is pretty damn neat.
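For example (my own toy examples, nothing fancy), building a couple of lists in one line each:

    squares = [x ** 2 for x in range(1, 6)]        # one line instead of a loop-and-append block
    evens = [x for x in range(10) if x % 2 == 0]   # you can filter as you build, too
    print(squares)   # [1, 4, 9, 16, 25]
    print(evens)     # [0, 2, 4, 6, 8]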

This week we focused on the theme of recursion, with a sprinkle of list comprehensions. We worked through some problems, such as nested lists, and it really required a certain mindset that I haven't adjusted to yet. Also, doing these problems I learnt about using if and else in one line. I thought it was pretty cool to be able to do such a thing. Evaluate this expression if the condition happens to be true, and the other one if it's false. This could make for some pretty short and concise code, which I am a fan of. Long gone are the days of if and else blocks!
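Putting the two together, here's a rough sketch (my own, not the exact lecture problem) of summing a nested list recursively, using a list comprehension with the one-line if/else inside it:

    def nested_sum(lst):
        """Return the sum of all the numbers in a possibly nested list."""
        # one-line if/else: recurse on sublists, keep plain numbers as they are
        return sum([nested_sum(x) if isinstance(x, list) else x for x in lst])

    print(nested_sum([1, [2, 3], [4, [5, 6]]]))   # 21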

Thursday 23 January 2014

SLOG #1 (Week 3): OOPs.

We've only just gotten back into the semester, yet we've already finished a quarter of the course. Oops, guess I seem to have lost track of time. Ah, OOP. Object-oriented programming. What is it? Simply put, it's a method of computer programming that treats certain pieces of code as objects. In Python's case, these objects are instances of classes, and we give these objects properties/attributes and methods (functions). So instead of looking at programs as logical procedures and steps, we can treat them in a more visual form; we can treat code as if it were made of objects. We are able to modify and interact with the instances/objects we have created. In my opinion, object-oriented programming is unique in that it really allows a person to visualize and understand the interactions between pieces of code.

During these lectures, we've managed to create a few objects. One of these was a Stack ADT (abstract data type). This stack object wrapped a list that we were able to interact with. We could push an item onto the top of the stack, and we could pop an item off the top of the stack. Normally, one would think of coding a stack procedurally, step by step: create this list, and create functions to do things to it (i.e. push and pop). But with object-oriented programming, we are able to instead think: hey, let's visualize a stack. Think of a stack of rocks, and how we are only able to add (push) things to the top of the stack and remove (pop) things off the top of the stack.
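Here's roughly what that Stack class looked like (a sketch from memory, so the details may not match our exact code):

    class Stack:
        """A last-in, first-out collection of items."""

        def __init__(self):
            self._items = []              # underlying list, top of the stack at the end

        def push(self, item):
            """Add item to the top of the stack."""
            self._items.append(item)

        def pop(self):
            """Remove and return the item on top of the stack."""
            return self._items.pop()

        def is_empty(self):
            """Return True if the stack has no items."""
            return self._items == []

    s = Stack()
    s.push('rock 1')
    s.push('rock 2')
    print(s.pop())        # 'rock 2', the last rock placed on top comes off first
    print(s.is_empty())   # False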

(Before I move on, this reminds me of one of the exercises that I actually really liked during the lectures. We were given a paragraph and told to keep track of the most important noun, verb and adjective. With this we were able to create a class, having the main noun as the class name, the verbs as the methods, and adjectives as attributes. Just putting it out there, I thought it was a unique and interesting approach to building a class, and in a way it seemed like it could be applied to a real life situation.)
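(As a made-up example of that exercise, nothing we actually wrote in class: if the paragraph were about a happy dog that barks, you might end up with something like this:)

    class Dog:
        """The main noun from the paragraph becomes the class."""

        def __init__(self):
            self.happy = True   # an adjective becomes an attribute

        def bark(self):         # a verb becomes a method
            return 'Woof!' if self.happy else '...'

    print(Dog().bark())   # Woof!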

So, back to the stack of rocks. With the visualization of the stack of rocks, we can really understand what this stack ADT is supposed to be like, and how it works. That's what I really like about object-oriented programming. I'm a visual person, so I understand things better when I can visualize the situation, and object-oriented programming is perfect when it comes to visualizing code and seeing how it interacts with everything else.

TL;DR: Python is an object-oriented programming language. I think object-oriented programming is great in that it helps visualize code, and it makes code much clearer.

Oh, also more lights should be turned on. I'm not a morning person, and the dim lighting makes it hard to stay awake at times when the lecture slows down. Just sayin'.