Reviewing what I studied; how this works will be explained as well.
Bucket Sort is a sorting algorithm similar in concept to Radix Sort and Counting Sort, but it is designed to handle floating-point numbers. Like those algorithms, Bucket Sort relies on auxiliary storage to sort elements efficiently. However, unlike Radix Sort, which sorts integers digit by digit, Bucket Sort distributes floating-point numbers into buckets and then sorts each bucket individually.
Time Complexity
The time complexity of Bucket Sort can vary depending on the distribution of the input data. In the best case, where the numbers are uniformly distributed, the time complexity can be O(N + K), where N is the number of elements and K is the number of buckets. However, if the numbers are not well-distributed and most elements end up in a single bucket, the time complexity can degrade to O(N^2) due to the sorting algorithm used within each bucket.
Steps to Implement Bucket Sort
Divide Data into Buckets: If there are N data points, divide them into K buckets. Ideally, K should be close to N for optimal performance.
Distribute Elements into Buckets: Place each element into its corresponding bucket based on its value.
Sort Each Bucket: Use a sorting algorithm like Insertion Sort to sort the elements within each bucket.
Merge Sorted Buckets: Combine the sorted elements from all buckets to produce the final sorted array.
Implementation
Here’s an example implementation of Bucket Sort for floating-point numbers. We’ll consider a simple case with 10 buckets and the input array { 0.78f, 0.17f, 0.39f, 0.26f, 0.72f, 0.94f, 0.21f, 0.12f, 0.23f, 0.67f }.
```cpp
// Insertion Sort, used to sort the elements within one bucket
void InsertionSort(vector<float>& bucket) {
    for (int i = 1; i < (int)bucket.size(); ++i) {
        float key = bucket[i];
        int j = i - 1;
        while (j >= 0 && bucket[j] > key) {
            bucket[j + 1] = bucket[j];
            j--;
        }
        bucket[j + 1] = key;
    }
}

void BucketSort(vector<float>& arr, int num_buckets) {
    vector<vector<float>> buckets(num_buckets);

    // Put each element into its bucket
    for (auto& a : arr) {
        int index = int(a * num_buckets);
        buckets[index].push_back(a);
    }

    // Sort each bucket individually
    for (int i = 0; i < (int)buckets.size(); i++) {
        InsertionSort(buckets[i]);
    }

    // Update arr from the sorted buckets
    int index = 0;
    for (int i = 0; i < (int)buckets.size(); i++) {
        for (int j = 0; j < (int)buckets[i].size(); j++) {
            arr[index++] = buckets[i][j];
        }
    }
}
```
For the brute-force approach, the time complexity is O(N^2) because we traverse the vector in two nested loops of size N, while the space complexity is O(1) since no auxiliary container grows with the input. Normally, in a case like this, we can achieve O(n) time complexity at the cost of O(n) space complexity by using a hash map.
```cpp
class Solution {
public:
    vector<int> twoSum(vector<int>& nums, int target) {
        unordered_map<int, int> hash;
        int n = nums.size();
        for (int i = 0; i < n; i++) {
            int x = target - nums[i];
            if (hash.find(x) != hash.end()) {
                return {hash[x], i};  // order doesn't matter
            }
            hash[nums[i]] = i;
        }
        return {};
    }
};
```
The key line is int x = target - nums[i];: for each element we compute the complement that would add up to the target. By using the dictionary (hash map), we can check whether that complement has already been seen; if it has, we return the indices of the two numbers that add up to the target. If we don't find it, we add the current number and its index to the dictionary and move on.
Counting Sort is a sorting algorithm that works by counting the number of objects that have each distinct key value, and using arithmetic to determine the positions of each key value in the output sequence. It's like creating a histogram of the data by counting the number of occurrences of each key value.
How it works
Create a count array to store the count of each unique value (its size is the highest value in the input + 1).
Count the occurrences of each element in the input array and store them in the count array.
Add the previous count to the current count (a prefix sum), so each entry gives the last position of that value in the output array.
Create the output array to store the sorted elements.
Iterating over the input array in reverse order, place input[i] at index output[count[input[i]] - 1], and update the count of that element in the count array (subtract 1 from the count). Traversing in reverse keeps the sort stable.
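The steps above can be sketched in C++ as follows (a minimal sketch assuming non-negative integer keys; the function name is my own):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Counting Sort for non-negative integers, following the steps above
vector<int> CountingSort(const vector<int>& input) {
    if (input.empty()) return {};

    // 1. Count array sized (highest value + 1)
    int max_val = *max_element(input.begin(), input.end());
    vector<int> count(max_val + 1, 0);

    // 2. Count occurrences of each value
    for (int v : input) count[v]++;

    // 3. Prefix sums: count[v] now holds the last 1-based position of v
    for (int v = 1; v <= max_val; v++) count[v] += count[v - 1];

    // 4-5. Fill the output array in reverse order to keep the sort stable
    vector<int> output(input.size());
    for (int i = (int)input.size() - 1; i >= 0; i--) {
        output[count[input[i]] - 1] = input[i];
        count[input[i]]--;
    }
    return output;
}
```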
Quick Sort is an efficient, comparison-based sorting algorithm that employs the divide and conquer strategy. Unlike Merge Sort which divides arrays into equal halves, Quick Sort creates unbalanced partitions based on a pivot element. This characteristic gives Quick Sort its name - it’s particularly “quick” for many real-world scenarios.
The Divide and Conquer Strategy in Quick Sort
Divide: Select a pivot element and partition the array into two sub-arrays - elements less than the pivot and elements greater than the pivot.
Conquer: Recursively sort the sub-arrays.
Combine: Since the partitions are already sorted in place, no explicit combination step is needed. Let’s actually see how this works.
Partitioning
Partitioning is a crucial step in the Quick Sort algorithm where elements are rearranged around a pivot value. There are two main partitioning schemes: Lomuto and Hoare.
Lomuto Partition Scheme

The Lomuto partition scheme typically selects the last element as the pivot. It uses a one-directional scanning technique that maintains three distinct partitions: elements less than or equal to the pivot, elements greater than the pivot, and the unknown section.
```cpp
pair<int, int> lomutoPartition(int data[], int lo, int hi) {
    int pivot = data[hi];  // alternatively, you can select the pivot from the middle of the array
    int i = lo - 1;
    for (int j = lo; j < hi; j++) {
        if (data[j] <= pivot) swap(data[++i], data[j]);
    }
    swap(data[++i], data[hi]);  // place the pivot at its final position i
    return make_pair(i - 1, i + 1);  // end of the left partition, start of the right partition
}
```
Hoare Partition Scheme

The Hoare partition scheme, developed by Sir Charles Antony Richard Hoare (the creator of Quick Sort), typically selects the first element as the pivot. It uses a two-directional scanning technique with two pointers moving in opposite directions.
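The post doesn't include Hoare code, but a common textbook version can be sketched like this (a sketch, not the author's implementation):

```cpp
#include <utility>
using namespace std;

// Hoare partition: two pointers scan toward each other and swap
// out-of-place elements. Returns an index j such that every element
// in [lo, j] is <= every element in [j+1, hi].
int hoarePartition(int data[], int lo, int hi) {
    int pivot = data[lo];  // Hoare's scheme typically picks the first element
    int i = lo - 1;
    int j = hi + 1;
    while (true) {
        do { i++; } while (data[i] < pivot);  // scan right for an element >= pivot
        do { j--; } while (data[j] > pivot);  // scan left for an element <= pivot
        if (i >= j) return j;                 // pointers crossed: boundary found
        swap(data[i], data[j]);
    }
}
```

Note that, unlike Lomuto, the returned index is a boundary between the two halves, not necessarily the pivot's final position.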
The time complexity depends on the pivot choice, because a bad pivot makes the partitions unbalanced. The worst case is O(n^2), but the best (and average) case is O(n log n).
Quick Selection (Quickselect)
Quick Selection, also known as Quickselect or Hoare’s Selection Algorithm, is an efficient algorithm designed to find the kth smallest element in an unordered list. Developed by Tony Hoare (the same computer scientist who created Quicksort), this algorithm shares the same partitioning strategy as Quicksort but with a crucial difference in its recursive approach.
Algorithm Overview
Quickselect uses a divide-and-conquer approach similar to Quicksort:
Choose a Pivot: Select an element from the array as the pivot
Partition: Rearrange the array so elements less than the pivot are on the left, and elements greater are on the right
Selective Recursion: Unlike Quicksort which recurses on both partitions, Quickselect only recurses on the partition that contains the kth element we’re looking for
This selective recursion is what gives Quickselect its efficiency advantage over a full sorting algorithm when we only need to find a specific element.
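A minimal Quickselect sketch using a Lomuto-style partition (0-based k; the names are my own, not code from the post):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Lomuto-style partition used by Quickselect; returns the pivot's final index
int partition(vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; j++) {
        if (a[j] <= pivot) swap(a[++i], a[j]);
    }
    swap(a[++i], a[hi]);
    return i;
}

// Returns the kth smallest element (k is 0-based) by recursing
// only into the partition that contains index k.
int quickSelect(vector<int>& a, int lo, int hi, int k) {
    if (lo == hi) return a[lo];
    int p = partition(a, lo, hi);
    if (k == p) return a[p];
    if (k < p) return quickSelect(a, lo, p - 1, k);
    return quickSelect(a, p + 1, hi, k);
}
```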
The time complexity of Quickselect varies depending on the scenario:
Best Case: O(n) - This occurs when each partition divides the array into roughly equal halves.
Average Case: O(n) - Even with random pivot selection, the algorithm performs linearly on average.
Worst Case: O(n²) - This happens when the partitioning is maximally unbalanced (e.g., when the array is already sorted and we choose the first or last element as pivot).
The linear average-case time complexity makes Quickselect significantly more efficient than sorting algorithms (which require at least O(n log n) time) when we only need to find a specific element.
Selection Sort is a simple and intuitive comparison-based sorting algorithm. The core idea behind this algorithm is to divide the array into a sorted portion and an unsorted portion, then repeatedly find the smallest (or largest) element from the unsorted portion and place it at the end of the sorted portion.
Algorithm Steps
Selection Sort operates through the following steps:
Initialization: Conceptually divide the array into a sorted portion (initially empty) and an unsorted portion (initially the entire array).
Find Minimum: Find the smallest element in the unsorted portion.
Swap: Exchange the found minimum with the first element of the unsorted portion.
Boundary Shift: Move the boundary between sorted and unsorted portions one position to the right.
Repeat: Continue this process until the entire array is sorted.
Because the algorithm always runs a full inner loop inside the outer loop, the time complexity is O(n²) in all cases.
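The steps above can be sketched as follows (a minimal sketch; the function name is my own):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Selection Sort: grow the sorted portion from the left by repeatedly
// swapping the minimum of the unsorted portion to the boundary.
void SelectionSort(vector<int>& a) {
    for (int i = 0; i < (int)a.size(); i++) {      // i = boundary of the sorted portion
        int min_idx = i;
        for (int j = i + 1; j < (int)a.size(); j++) {
            if (a[j] < a[min_idx]) min_idx = j;    // find minimum in the unsorted portion
        }
        swap(a[i], a[min_idx]);                    // move it to the end of the sorted portion
    }
}
```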
Kth Selection Sort
Problem Statement: The selection problem is defined as: Given an array of n elements and an integer k (1 ≤ k ≤ n), find the kth smallest (or largest) element in the array.
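One simple way to solve the selection problem with Selection Sort is to stop after k passes; this sketch (names are my own) runs in O(n·k):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Find the kth smallest element (1-based k) by running only k passes
// of Selection Sort: after pass i, position i-1 holds the ith smallest.
int KthSmallest(vector<int> a, int k) {  // pass by value: caller's array is untouched
    for (int i = 0; i < k; i++) {
        int min_idx = i;
        for (int j = i + 1; j < (int)a.size(); j++) {
            if (a[j] < a[min_idx]) min_idx = j;
        }
        swap(a[i], a[min_idx]);
    }
    return a[k - 1];
}
```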
This sorting algorithm is based on the divide and conquer strategy. The steps for a simple array are as follows:
Divide the input array into two halves. (Divide)
Sort each half. (Conquer)
Merge the two halves. (Combine)
Basically, we're breaking the problem of sorting the array into two smaller sub-problems, and then we merge the two halves once they are sorted.
The problem statement: given some array, sort it in increasing order. Two solutions exist: bottom-up (with a for loop) and top-down (recursive). Let's explore what the code looks like and the logic behind it. If the size of the array or vector is 0 or 1, it is already sorted. Note that the sorting doesn't happen when the array is divided; it happens while combining (merging).
Let’s look at the recursive way to solve this first (top-down merge sort)
```cpp
#include <vector>
using namespace std;

vector<int> aux;  // auxiliary array shared by every merge

void Merge(vector<int>& a, int lo, int mid, int hi) {
    aux.resize(a.size());
    int i = lo, j = mid + 1;
    for (int k = lo; k <= hi; k++) {
        aux[k] = a[k];  // copy the current range into the auxiliary array
    }
    // you can do this with while loops in a similar way, which can be easier to read
    for (int k = lo; k <= hi; k++) {
        if (i > mid) a[k] = aux[j++];               // left half is exhausted, take from the right
        else if (j > hi) a[k] = aux[i++];           // right half is exhausted, take from the left
        else if (aux[i] < aux[j]) a[k] = aux[i++];  // compare the values, take the smaller one
        else a[k] = aux[j++];
    }
}

void SortHelper(vector<int>& a, int lo, int hi) {
    if (hi <= lo) return;
    int mid = lo + (hi - lo) / 2;
    SortHelper(a, lo, mid);
    SortHelper(a, mid + 1, hi);
    Merge(a, lo, mid, hi);
}

int main() {
    std::vector<int> vec = {3, 5, 8, 9, 6, 2};
    SortHelper(vec, 0, vec.size() - 1);
}
```
This is the definitive way to sort an array or vector. If we have a problem of size n, we divide it into halves of size n/2 and so on, so the number of levels created is log(n); since each level does O(n) work while merging, the total becomes O(n log n).
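The bottom-up (for-loop) variant mentioned earlier can be sketched like this (a sketch following the standard iterative formulation; the function name is my own):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Bottom-up merge sort: merge runs of width 1, 2, 4, ... with no recursion.
void BottomUpMergeSort(vector<int>& a) {
    int n = a.size();
    vector<int> aux(n);
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n - width; lo += 2 * width) {
            int mid = lo + width - 1;
            int hi = min(lo + 2 * width - 1, n - 1);
            // merge a[lo..mid] with a[mid+1..hi]
            for (int k = lo; k <= hi; k++) aux[k] = a[k];
            int i = lo, j = mid + 1;
            for (int k = lo; k <= hi; k++) {
                if (i > mid) a[k] = aux[j++];
                else if (j > hi) a[k] = aux[i++];
                else if (aux[i] < aux[j]) a[k] = aux[i++];
                else a[k] = aux[j++];
            }
        }
    }
}
```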
Given an array / vector containing {2, 8, 5, 3, 9, 4}, we want to sort it in increasing order.
Work left to right
Examine each item and compare it to items on its left.
Insert the item in the correct position in the array.
We know the first element is sorted by itself, which is why we loop from index 1 to the vector size. Having partitioned off the first group (element), we compare the next value against the far right of the partitioned group. (For example, if i = 2, then j = 2: we compare 8 and 5; 5 is smaller than 8, so we swap them.) Another case: if i = 3, we compare 8 and 3 and swap, then compare 5 and 3 and swap again. Basically, each new value is inserted into the partitioned part by comparing it against that part's last elements; that is exactly where the "insertion" happens!
```cpp
#include <iostream>
#include <vector>

void Print(const std::vector<int>& vec) {
    for (auto& v : vec) {
        std::cout << v << " ";
    }
    std::cout << std::endl;
}

int main() {
    std::vector<int> vec = {2, 8, 5, 3, 9, 4};
    Print(vec);
    // Logic: the first element is always sorted.
    for (int i = 1; i < (int)vec.size(); i++)
        for (int j = i; j > 0 && vec[j - 1] > vec[j]; j--)
            std::swap(vec[j - 1], vec[j]);
    Print(vec);
}
```
But we can also simplify and slightly optimize this version by not using std::swap.
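The swap-free variant mentioned above can be sketched as follows (shifting elements instead of swapping; the function name is my own):

```cpp
#include <vector>
using namespace std;

// Insertion sort without std::swap: shift larger elements one slot to
// the right, then drop the saved key into the gap (one write per shift
// instead of the three writes a swap costs).
void InsertionSortShift(vector<int>& vec) {
    for (int i = 1; i < (int)vec.size(); i++) {
        int key = vec[i];
        int j = i - 1;
        while (j >= 0 && vec[j] > key) {
            vec[j + 1] = vec[j];  // shift right
            j--;
        }
        vec[j + 1] = key;  // insert the key into its position
    }
}
```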
Since there are two nested for loops (each new value is compared against the partitioned elements), the complexity is O(n^2) in general. The best case is O(n) when the array is already sorted, but the worst case is O(n^2) when the array is sorted in decreasing order.
At work, I was put in charge of designing UNDO / REDO. While looking through various sources, I ran into something called @abstractstaticmethod. So let's look into why abstract is used in the first place, along with static methods, class methods, and instance methods.

First, the figure below gives a rough sense of why abstract classes are needed.

The definition goes like this: once a Weapon class (an abstract class) is created, each weapon has different characteristics, so the action methods of the Gun class and the Sword class will differ, and so will their roles. The Weapon class can have an attack() method, but how it is implemented internally depends on the kind of weapon.

In other words, the various weapon classes (e.g., Sword, Gun, etc.) inherit from the Weapon class, and the detailed implementation must be done in the child classes that inherit from it. At this point you might ask: isn't this theoretically the same thing as overriding?

An abstract class can be thought of as holding an abstract, rough outline. The concrete classes are its children; the abstract class is the frame that pins down the members (fields, methods) of those concrete classes.

That is, it lets us unify only the method declarations [see link] while each implementation differs. And because there are cases where the method declaration must be unified but the behavior of each class must differ, abstract method overriding is necessary.
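Such a Weapon hierarchy can be sketched with Python's abc module (an illustrative sketch; the concrete Gun/Sword behaviors and return strings are my own assumptions):

```python
from abc import ABC, abstractmethod

class Weapon(ABC):
    """Abstract class: declares attack() but leaves the
    implementation to each concrete weapon."""
    @abstractmethod
    def attack(self):
        ...

class Gun(Weapon):
    def attack(self):
        return "bang!"

class Sword(Weapon):
    def attack(self):
        return "slash!"

# The declaration is unified, but each instance behaves differently.
print(Gun().attack())    # bang!
print(Sword().attack())  # slash!
```

Note that instantiating Weapon itself raises a TypeError, which is exactly what forces child classes to provide their own attack().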
As in the code above, Weapon is a class, but an abstract one rather than a concrete class, and the specific weapons inherit from the Weapon class. In Python, when you instantiate each weapon, the results are all different. It may be a contrived example, but it's like making dalgona: the act of making dalgona is the same no matter the shape, but which shape you stamp into it is different.
I bought a ROS2 book and have been reading it... I had already sensed the differences between ROS1 and ROS2 to some degree, but it definitely feels like there are a lot of them.
Definition of DDS
The definition of DDS goes like this: DDS is short for Data Distribution Service, a trademark created by the OMG (Object Management Group) to establish the standard. Formally it's a mouthful... OMG here is not "Oh My God"; it's amusing that the body setting the rules is called OMG. As the book presents it, though, this is middleware for data communication.
The OMG's own definition of DDS seems even clearer:

The Data Distribution Service (DDS) is a middleware protocol and API standard for data-centric connectivity from the OMG. It integrates the components of a system together, providing low-latency data connectivity, extreme reliability, and a scalable architecture that business and mission-critical IoT applications need.

In a distributed system, middleware is the software layer that lies between the operating system and applications. It simplifies the development of distributed systems by letting software developers focus on the specific purpose of their applications rather than the mechanics of passing information between applications and systems.
Reading this, I remember hearing about distributed systems in a graduate course, which is why middleware is a somewhat familiar term to me.

The book puts it this way: "In reality, its substance is the data-centric middleware protocol (DDSI-RTPS) together with a middleware API that satisfies the DDS specification." In other words, DDS acts as a bridge between the application and the OS. Here the API side mainly covers programming-language bindings and ROS concepts such as topics, data, types, filtering, and caching, while the protocol side covers sessions, reliability, QoS, and so on. Put differently, if you think of ROS as another OS, DDS connects the communication between the OS and the applications we use, and handles things like programming-language compatibility.
Feature of DDS
Industry Standards
OS Independent
Language Independent
Transport on UDP / IP
Data Centricity
Dynamic Discovery
Scalable Architecture
Interoperability
Quality of Service(QoS)
Security
A quick aside: what is middleware?
Resource (excerpt)
ROS2 (Robot Operating System), "ROS2 로 시작하는 로봇 프로그래밍" by 표윤석 and 임태운