Query

```python
from vllm import SamplingParams

# `llm` (a vllm.LLM instance) and `tokenizer` are assumed to be initialized already.
instruction = 'ํŒŒ์ด์ฌ merge sort ์ฝ”๋“œ๋ฅผ ์ œ์‹œํ•ด์ค˜'  # "Show me Python merge sort code"

messages = [
    {
        "role": "system",
        "content": "๋‹น์‹ ์€ ํ›Œ๋ฅญํ•œ AI ๋น„์„œ์ž…๋‹ˆ๋‹ค. ์งง์€ ๋‹ต๋ณ€์„ ์ œ์‹œํ•˜๊ณ , ๋‹ค์Œ์œผ๋กœ ์ƒ์„ธ ์„ค๋ช…์„ ํ•ด์ฃผ์„ธ์š”. You are a great AI assistant. Give a short answer, then elaborate.",
    },
    {
        "role": "user",
        "content": instruction,
    },
]

# Render the chat messages into a single prompt string using the model's chat template.
prompt_message = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on either the regular EOS token or the Llama-3-style end-of-turn token.
eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]

outputs = llm.generate(
    prompt_message,
    SamplingParams(
        stop_token_ids=eos_token_id,
        temperature=0.8,
        top_p=0.95,
        max_tokens=512,
    ),
)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(generated_text)
```
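The `stop_token_ids` parameter tells vLLM to cut generation as soon as either listed token is produced. Conceptually, the truncation behaves like the toy sketch below; the token IDs are made up for illustration (128009 stands in for `<|eot_id|>`), and real vLLM does this internally during decoding:

```python
def truncate_at_stop(token_ids, stop_ids):
    # Return the tokens that precede the first stop token, if any.
    out = []
    for t in token_ids:
        if t in stop_ids:
            break
        out.append(t)
    return out

# Hypothetical generated stream: generation halts at the stand-in stop token.
stream = [15339, 1917, 128009, 42]
print(truncate_at_stop(stream, {128009, 2}))  # -> [15339, 1917]
```

Passing both IDs matters for Llama-3-style models, whose chat turns end with `<|eot_id|>` rather than the plain EOS token; without it, generation can run on past the assistant's answer.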

Result

๋‹ค์Œ์€ ํŒŒ์ด์ฌ์œผ๋กœ ๊ตฌํ˜„๋œ ๋จธ์ง€ ์†ŒํŠธ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค.

```python
def merge_sort(arr):
    # Base case: a list of length 0 or 1 is already sorted.
    if len(arr) <= 1:
        return arr

    # Split the list in half and sort each half recursively.
    mid = len(arr) // 2
    left = arr[:mid]
    right = arr[mid:]

    return merge(merge_sort(left), merge_sort(right))

def merge(left, right):
    # Merge two sorted lists into a single sorted list.
    result = []
    i = 0
    j = 0

    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append whatever remains in either list.
    result += left[i:]
    result += right[j:]

    return result

arr = [5, 2, 8, 12, 3]
result = merge_sort(arr)
print(result)  # [2, 3, 5, 8, 12]
```

์ด ์ฝ”๋“œ๋Š” ์žฌ๊ท€์ ์œผ๋กœ ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฐ˜์œผ๋กœ ์ž๋ฅธ ๋’ค, ๊ฐ๊ฐ์˜ ๋ฐ˜์„ ๋‹ค์‹œ ์žฌ๊ท€์ ์œผ๋กœ ์ž๋ฅด๊ณ , ์ตœ์ข…์ ์œผ๋กœ ๋‘ ๊ฐœ์˜ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `merge()` ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•ด ์ด ๋‘ ๊ฐœ์˜ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ •๋ ฌํ•˜์—ฌ ํ•ฉ์นฉ๋‹ˆ๋‹ค. ์ตœ์ข…์ ์œผ๋กœ `result` ๋ฆฌ์ŠคํŠธ์— ์ •๋ ฌ๋œ ๋ฐฐ์—ด์ด ์ €์žฅ๋ฉ๋‹ˆ๋‹ค.

Downloads last month: 32
Model size: 71B params (Safetensors; tensor types: I32, F16)